SEBM-PSO is similar to EBM-PSO, but instead of one optimization problem, four separate optimization problems are formed.

Simulation results

A 6DOF simulation model, based on the 6DOF equations of motion in Eq. (2)-Eq. (7), is built to generate the needed flight test data. In this model, thrust, mass, inertia and the vehicle's dimensions are obtained from experimental tests, and the aerodynamic model is obtained from CFD and Missile DATCOM software. As stated before, the control actuation system is a one-pair ON-OFF type. Therefore, an actuation signal δ(t) = sin(2pt) + A·sin(pt + φ) is applied to the vehicle, where A·sin(pt + φ) is the input signal and sin(2pt) is the linearization signal (Nobahari & Mohammad Karimi, 2011). The model outputs are the accelerations. Because of the noisy nature of real sensors, a rational amount of noise is added to the simulated data in order to mimic real sensor outputs. Here, a uniformly distributed random noise is added to the original signal sig_m obtained from simulation (Tieying, Jie, & Kewei, 2015):

sig_noisy = sig_m · (1 + ξ(2·rand − 1))    (43)

where ξ determines the noise-to-signal ratio, and rand stands for a uniformly distributed real number in [0, 1]. Adding noise according to Eq. (43) makes the simulation data closer and more equivalent to real experimental data. The noise effect is considered by setting ξ = 0.05 for the acceleration and angular rate data. The sampling rate of the measurement signal is set to 500
Hz, and the simulation data are saved during the time interval of 5-6 seconds of the flight. The selected flight regime is a high-speed regime with limited Mach number variations (Mach 1.68-1.7), so the aerodynamic coefficients are assumed to be constant. At high Reynolds numbers, viscosity effects are small. In other words, during this nearly constant-speed flight condition, the effects of aerodynamic phenomena such as viscosity, Mach number and Reynolds number are negligible. As the current research is focused on the estimation algorithms, using this model will not affect the efficiency of the proposed methods; it can even be used as a benchmark to validate the proposed algorithms. Finally, the simulation outputs are saved and used as the measurement data. All routines run on a PC with a 2.0 GHz CPU and 4.0 GB of RAM.

APE with EAM-PSO

The estimation model of Eq. (23) is used in EAM-PSO. Figure 4 shows the convergence behavior of the EAM-PSO objective function during 100 iterations, for a single run. Table 3 summarizes the EAM-PSO results. It shows that the average estimation error is about 5.1%. The best estimated coefficient is Clp with 1.2% error, and the worst is Cx with 9.3% error. On average, each run of EAM-PSO with 100 iterations takes 92 minutes.

APE with EBM-PSO and SEBM-PSO

The estimation model of Eq. (28) is used. As can be seen, applying the smoothing filter has removed the high-frequency noise. Then the second step of EBM-PSO is carried out for both EBM-PSO and SEBM-PSO. The estimated and nominal values of the aerodynamic
coefficients Cx, Czq, Clp and Cmα, for a single run, are compared in Figure 13 to Figure 16. Summarized results of a single run are presented in Table 4. For EBM-PSO, the average estimation error is 2.4%; the best result is for Clp with 0.5% error, and the worst is the estimation of Czα with 4.6% error. The SEBM-PSO results show an average estimation error of 2.2%; the best estimation is for Cl0 with 0.2% error, and the worst is the estimation of Cmq with 5.5% error. As shown, SEBM-PSO gives more accurate results than EBM-PSO because it solves separate optimization problems. Here, in addition to the three newly developed algorithms, an EKF algorithm is also implemented for the problem at hand. The EKF state vector is X = [u, v, w, p, q, r, φ, θ, ψ], its unknown parameter vector to be estimated is Θ = [Cx, Czα, Czδ, Czq, Cl0, Clp, Cmα, Cmδ, Cmq], and its measurement vector is Y = [aX1, aY1, aZ1, aX2, aY2, aZ2, p, q, r]. The results of these four algorithms, for 100 successive runs, are compared and referenced to the nominal values calculated by CFD and Missile DATCOM simulations, and presented in Table 7. It can be seen that the most accurate estimations are obtained by SEBM-PSO. The average estimation
error for SEBM-PSO is 2.2%, for EBM-PSO 2.4%, for EAM-PSO 5.1% and for EKF 8.4%. The three proposed algorithms give better estimations than EKF; however, EKF gives a better estimation for Clp. In terms of run time, EKF has the best performance, with an average of 180 seconds. After that, EBM-PSO and SEBM-PSO have a comparable computational run time, while EAM-PSO has a long run time (92 min). Both EBM-PSO and SEBM-PSO have considerably better run times in comparison with EAM-PSO, because the numerical integration process is eliminated in the EBM strategy. The best value for each aerodynamic parameter is highlighted in Table 5. SEBM-PSO gives the best accuracy for Cx, Czα, Czδ, Cl0, Cmα and Cmδ. The best value for Czq is given by EAM-PSO, the best estimation for Cmq by EBM-PSO, and the best estimation error for Clp is obtained by the EKF algorithm.

Measurement noise effect on EBM-PSO performance

The effect of the noise amplitude on EBM-PSO performance is studied here by performing 100 successive runs with three different noise magnitudes. The results are shown in Table 6. As expected, increasing the noise amplitude causes the estimation accuracy to decrease. In another simulation, the effect of the noise magnitude when the smoothing filter is removed from EBM-PSO is evaluated. The average values of the estimated aerodynamic coefficients over 100 successive runs are presented in Table 7. Comparing with the results of Table 6 shows that the Savitzky-Golay smoothing filter helps the algorithm to
decrease the noise effects. For example, when ξ = 0.05, using the smoothing filter improves the average estimation error from 6.2% to 3.5%.

Conclusions

In this article, three heuristic estimation algorithms, based on a modified version of the particle swarm optimization algorithm, are proposed to perform aerodynamic parameter estimation of a typical rolling airframe. EBM-PSO requires an extra accelerometer in practice, and the measurement unit needs to be fixed far from the C.G. position. It was shown that EBM-PSO is faster in run time due to canceling the time-consuming numerical integration procedure that is needed in the EAM-PSO algorithm. The SEBM-PSO algorithm provides more exact results by separating the estimation problem into a set of low-dimensional optimization problems. All aerodynamic coefficients may be estimated by the proposed algorithms at once. Comparing the performance of the proposed algorithms with that of the EKF shows their more exact results, with comparable run times for the EBM-PSO and SEBM-PSO algorithms. The evaluation studies show that the aerodynamic parameter estimation accuracy is affected by measurement noise, and that applying the Savitzky-Golay smoothing filter is very useful for reducing the noise effect and enhancing the estimation accuracy. The simulation results reveal that the proposed methods can be used in practical applications. The presented work may be extended to heuristic real-time estimation algorithms that consider aerodynamic coefficient variations with flight parameters such as Mach number.
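The noise injection of Eq. (43) and the Savitzky-Golay smoothing step can be sketched as follows. This is a minimal illustration: the test signal, window length and polynomial order are assumptions, not the paper's actual settings.

```python
import numpy as np

# Illustrative sketch (assumed signal and filter settings, not the paper's
# exact values). Noise model follows Eq. (43):
#   sig_noisy = sig_m * (1 + xi * (2*rand - 1)),  rand ~ U[0, 1]
rng = np.random.default_rng(0)
t = np.linspace(5.0, 6.0, 500)                # 1 s of data sampled at 500 Hz
clean = np.sin(2.0 * np.pi * 4.0 * t)         # stand-in for one sensor channel
xi = 0.05                                     # noise-to-signal ratio
noisy = clean * (1.0 + xi * (2.0 * rng.random(t.size) - 1.0))

# Savitzky-Golay smoothing: least-squares fit of a cubic in a sliding
# window; the weights for the center sample are the first row of pinv(A).
half, order = 15, 3                           # window of 31 samples
i = np.arange(-half, half + 1)
A = np.vander(i, order + 1, increasing=True)  # columns: i^0, i^1, ..., i^3
w = np.linalg.pinv(A)[0]                      # symmetric smoothing weights
smoothed = np.convolve(noisy, w, mode="same")

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

core = slice(half, -half)                     # ignore edge effects
print("RMS error, noisy:   ", round(rms(noisy[core] - clean[core]), 4))
print("RMS error, smoothed:", round(rms(smoothed[core] - clean[core]), 4))
```

On this synthetic channel, the RMS error of the smoothed signal is several times smaller than that of the noisy one, in line with the filter's reported benefit.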
Capture and Separation of SO2 Traces in Metal–Organic Frameworks via Pre-Synthetic Pore Environment Tailoring by Methyl Groups

Abstract

Herein, we report a pre-synthetic pore environment design strategy to achieve stable methyl-functionalized metal–organic frameworks (MOFs) for preferential SO2 binding and thus enhanced low (partial) pressure SO2 adsorption and SO2/CO2 separation. The enhanced sorption performance is for the first time attributed to an optimal pore size obtained by increasing the methyl group density at the benzenedicarboxylate linker in [Ni2(BDC-X)2DABCO] (BDC-X = mono-, di-, and tetramethyl-1,4-benzenedicarboxylate/terephthalate; DABCO = 1,4-diazabicyclo[2.2.2]octane). Monte Carlo simulations and first-principles density functional theory (DFT) calculations demonstrate the key role of the methyl groups within the pore surface for the preferential SO2 affinity over the parent MOF. The SO2 separation potential of the methyl-functionalized MOFs has been validated by gas sorption isotherms, ideal adsorbed solution theory calculations, simulated and experimental breakthrough curves, and DFT calculations.

Figure S1. Section of the framework structure of DMOF (a-b) and DMOF-TM (c-d) obtained from the X-ray crystal structure data (CCDC numbers for DMOF and DMOF-TM are 802892 and 1879219). 5,6 Hydrogen atoms are omitted for clarity. Note that the structure of as-synthesized DMOF is slightly different from that of activated DMOF, as clarified by Stock et al. 5 Herein, the structure of DMOF is given as the activated one.

Figure S2. Section of the layer formed by the dinuclear Ni2 paddlewheel unit and the BDC/BDC-TM linkers in DMOF (a) and DMOF-TM (b). The aryl ring planes in DMOF and DMOF-TM are highlighted by green and blue ribbons. For DMOF, the carboxyl -CO2 groups are also
in the plane of the aryl rings, such that the dihedral angle O1-C1-C2-C3 is 0°. For DMOF-TM, the carboxyl groups are rotated out of the plane of the tetramethylphenyl rings, yielding a dihedral angle O1-C1-C2-C3 of 36°.

Figure S3. Powder X-ray diffraction (PXRD) patterns of DMOF and the DMOF-X series. The simulated PXRD patterns of DMOF and DMOF-TM were obtained from the X-ray crystal structure data (CCDC: 802892 and 1879219). 5

Figure S12. (a) Ar adsorption-desorption isotherms of DMOF and DMOF-X at 87 K (filled symbols adsorption; empty symbols desorption). For DMOF-DM, there is a two-step adsorption isotherm with an H2 hysteresis upon desorption. (b) Pore size distribution of DMOF and DMOF-X determined from Ar sorption at 87 K. The pore size distribution was calculated with QSDFT calculations using the "Ar at 87 K carbon QSDFT, slit pore, QSDFT equilibrium" model.

Table S6. Virial analysis of adsorption isotherms

To calculate the isosteric enthalpy of adsorption (-∆Hads) from the gas isotherm data, the virial method was used. Equation (1), in its standard virial form reconstructed here from the variable definitions, was used to fit the adsorption isotherms simultaneously at 273 and 293 K in the program Origin: 2

ln P = ln n + (1/T) Σi ai n^i + Σi bi n^i    (1)

where P is the pressure in kPa, n is the total amount adsorbed in mmol g-1, T is the temperature in K (here 273 K or 293 K), ai and bi are virial coefficients, and the numbers of coefficients in each sum are chosen to adequately fit the isotherms. Then -∆Hads can be calculated from equation (2), where R is the universal gas constant:

-∆Hads = -R Σi ai n^i    (2)

S5.2 IAST
Selectivity

The selectivity of SO2 over CO2, CH4 or N2 for DMOF-X was calculated from dual-site Langmuir-Sips (DSLAISips, eq. 3) fitted isotherm data, for which the best fit of the isotherms could be obtained, where qeq is the amount adsorbed (mmol/g), qmax is the maximal loading (mmol/g), K1 and K2 are the affinity constants 1 and 2 for adsorption (1/bar), p is the pressure (kPa) and t is the heterogeneity index. The 3P sim software (3P Instruments, Germany, version 1.1.0.7) calculates the maximal loadings of each gas depending on the given mole fraction. The selectivity S of binary gas mixtures was calculated using equation (4),

S = (x1/x2) / (y1/y2)    (4)

where xi represents the adsorbed gas amount and yi the mole fraction of each adsorptive. IAST with the DSLAISips isotherm model was chosen, and the total pressure was fixed at 1 bar to give the selectivity versus SO2 molar fractions between 0.01 and 0.5 in the gas mixtures. Alternatively, the SO2 to CH4 (or SO2 to N2) volume ratio was fixed to give the selectivity vs. pressure between 0.1 and 1.0 bar based on eq. (4).

Figure S23. SO2, CO2, CH4 and N2 adsorption isotherms of DMOF and DMOF-X at 293 K.

S6. Stability of crystallinity and porosity after dry and humid SO2 exposure

Experimental details: For the dry exposure, an SO2 isotherm was measured. For the humid SO2 exposure experiment, we used a setup (Figure S26) similar to that of Walton et al. 3 A controlled air flow of 2 L min-1 was bubbled through a
sodium hydrogen sulfite solution (0.4 g Na2S2O5 in 100 mL water) in a Schlenk round-bottom flask to transport gaseous SO2 into a humidity chamber (a desiccator vessel). The desiccator was equipped with a crystallizing dish filled with saturated sodium chloride solution (80 mL, relative humidity (RH) 75%) and an open vial filled with the sample (50 mg). The RH and the amount of SO2 in the desiccator were monitored with a hygrometer (VWR TH300) and an SO2 sensor (Dräger Pac 6000 electrochemical sensor), respectively. DMOF and DMOF-X were exposed to a humid SO2 environment at room temperature with 75 ± 6% RH and 35 ± 5 ppm SO2 for 6 h. The crystallinity (PXRD) and porosity (BET surface area and total pore volume) before and after dry and humid SO2 exposure were measured for all DMOFs as given below in Fig. S27-S31.

S7.1 Breakthrough simulation

Breakthrough simulations were done using the '3P sim' software, with calculations based on a 30 cm high column with an inner diameter of 3 cm, an axial dispersion of 50 cm2 min-1 and a continuous gas flow of 20 mL min-1. The mass transfer coefficients in the dispersion model were generally set to 10 min-1 for all gases.

S7.2 Breakthrough Experiments

Breakthrough curves were determined with an in-house apparatus. The adsorbing unit consists of a stainless-steel column, which holds the adsorbent sample. The column is a ¼" Swagelok®-type tube with a length of max. 10 cm. The dosing unit consists of three different
mass flow controllers (Bronkhorst High-Tech B.V., Netherlands, EL-Flow Prestige series, max. flow 50 mLN min-1), one each for the carrier gas nitrogen (Air Products, purity 5.2, 99.9992%), CO2 (Air Products, purity 4.5, 99.995%) and a test gas mixture of SO2 in N2 (Linde, 5 vol% SO2). The adsorbing unit can be temperature-controlled by a thermostat (Julabo GmbH, Germany, series F25) or by an electrical heating system. The gas phase can be analyzed in a bypass unit after the exit of the adsorber by an on-line mass-selective detector (Pfeiffer Vacuum GmbH, Germany, Prisma QMS 200). Around 75 mg of the sample was embedded in glass wool and filled into the adsorber tube equipped with a stainless-steel seal. The adsorbent material was pretreated at ~400 K for 12 hours. The sample was purged with 16.6 mLN min-1 of the carrier gas nitrogen during the pretreatment procedure and during each regeneration step to remove pre-adsorbed fluids. After the pretreatment, the adsorber column was cooled down to the measurement temperature. After regeneration, the adsorber was purged with the carrier gas nitrogen, the desired gas mixture was dosed into the system, and the gas composition at the outlet of the adsorber was recorded. The measurements were finished after reaching steady-state conditions with no significant change in gas composition and temperature. Between breakthrough experiments the samples were regenerated in dry nitrogen at 20 °C to ensure well-defined starting conditions for the next experiment. The sample mass was corrected using external TGA measurements at the
same conditions. For both adsorbents, DMOF and DMOF-TM, the weight loss was around 1.1 mass percent. This weight loss was used to quantify the correct mass of the adsorbent. Additionally, all breakthrough curves were corrected for each fluid by several blank runs to ensure high-quality data sets.

Scheme 1. Schematic of the apparatus used for breakthrough curve measurements. The dosing unit consists of three mass flow controllers for three independent gas inlets. The carrier gas (gas inlet 3) can also be used for dosing different humidities to the adsorbing unit.

S8. DFT calculations

S8.1 Cluster model DFT-D calculations

All DFT computations were performed with the Becke three-parameter Lee-Yang-Parr (B3LYP) functional in the Gaussian 16 package. 4 The DMOFs [Ni2(BDC or BDC-TM)2DABCO] are built from dinuclear paddle-wheel nickel units bridged by BDC or BDC-TM linkers to form 2D layers parallel to the ab-plane. These layers are pillared by DABCO linkers, resulting in a 3D framework with channels along each axis. The two model systems (Figure S40) were isolated from the respective crystal structures of DMOF and DMOF-TM 5,6 and resemble the two different local pore surface environments (along the a- and b-axes as well as along the a- and c-axes). Each model was set as the initial configuration for geometry optimization with the B3LYP-D3 functional. 7 The double-ζ basis set LANL2DZ was used for the Ni atoms, and the 6-311G** basis set was used for the other elements (C, H, O, N and S).

Figure S40. The structure of [Ni2(BDC or BDC-TM)2DABCO] was divided
into two model systems resembling the two different local pore surface environments. The two models in the left column correspond to DMOF, while the models in the right column represent DMOF-TM. Note that the methyl substituents have a notable influence on the channel size and shape, depending on the orientations of the phenylene units.

The binding energies for SO2 or CO2 adsorption at different sites were evaluated on the two model systems with the dispersion-corrected B3LYP-D3 functional. In the geometry optimizations of the MOF model systems with SO2 or CO2, the positions of the Ni atoms were fixed to those in the experimental structures. The binding energy (eq. 5) calculations were corrected for basis set superposition error (BSSE) using the counterpoise method. 8 The vibrational analysis, which supplies the necessary thermochemistry data, was performed at the same level as the geometry optimization to calculate the theoretical adsorption enthalpies at 298 K. The adsorption enthalpy was calculated by subtracting the enthalpies of the MOF and the gas-phase adsorbate from the enthalpy of the MOF with gas. See Figure S41 for the different binding sites in DMOF and Figure S42 for the different binding sites in DMOF-TM. b BE per gas molecule.

S8.2 Periodic DFT calculations with Quantum ESPRESSO

The structures of DMOF, DMOF-M (Figure S43a), DMOF-DM (Figure S43b) and DMOF-TM were geometry-optimized with Quantum ESPRESSO 9 using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme. (The two optimized DMOF-M and -DM structures were only used for the Monte Carlo simulations in Section 9.) During optimization, the cell constants were fixed to the experimentally obtained values. Atoms were
described with ultrasoft Rappe-Rabe-Kaxiras-Joannopoulos (RRKJ)-type pseudopotentials. Periodic plane-wave DFT computations were performed using the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional and the Monkhorst-Pack scheme with a 2 x 2 x 2 k-point mesh. An energy cutoff of 70 Rydberg and a charge-density cutoff of 700 Rydberg were applied. To account for dispersion effects, the semi-empirical Grimme D3 correction scheme was adopted. 10 The two Ni atoms in the unit cell were treated as antiferromagnetic, with starting magnetizations of +0.7 and -0.7 μB, using unrestricted DFT. This strategy yielded the lowest total energies for all DMOFs and final magnetizations of +1.24 and -1.24 μB at the respective Ni centers.

We performed DFT calculations of DMOF-TM with three, four and five adsorbed SO2 molecules to document the effect of increased SO2 loading on the flexibility of the structure. Figure S46 shows the fully relaxed, optimized DMOF-TM structure loaded with an increasing number (3-5) of SO2 molecules. Compared to the crystal structure parameters of DMOF-TM (CCDC number: 1879219; a = c = 10.8500(7) Å, b = 9.2239(6) Å, α = β = γ = 90°), the optimized cell angles vary, along with a slight deviation of the cell vector lengths, after loading SO2 molecules. In the optimized DMOF-TM with five SO2 molecules, the α-angle widens by ca. 3°, while the respective β- and γ-angles narrow by 16 and 3°. Simultaneously, we also performed partially relaxed optimizations of DMOF-TM with 3-5 SO2 molecules (Figure S45), in which the cell parameters were fixed to the experimental structure values. Notably, the
overall cell volume of the fully relaxed structures was smaller than that of the partially relaxed ones. Consequently, fewer SO2 molecules can be adsorbed within a fully relaxed structure.

Figure S45. The partially relaxed DFT-optimized DMOF-TM with 3 (a), 4 (b) and 5 (c) SO2 molecules.

Figure S46. The fully relaxed DFT-optimized DMOF-TM with 3 (a), 4 (b) and 5 (c) SO2 molecules, as well as their optimized unit cell parameters.

S9. Monte Carlo simulation of SO2 adsorption at low pressures

Atomic partial charges for the DFT-optimized structures of all DMOFs were generated with the REPEAT method. 11 The trend of enhanced SO2 affinity of methyl-functionalized DMOF-X with increasing density of methyl groups was well reproduced by the simulations, although the simulated isotherms slightly overestimate the uptake, most strongly for DMOF-DM (Figure S48). The lower SO2 uptake in the experimental isotherm of DMOF, compared to the simulated one (Figure S48), can probably be attributed to the large structural degradation of DMOF after SO2 adsorption. For DMOF-M and -DM, some disorder in the experimental structures might decrease the free volume and accessible surface area, resulting in lower SO2 adsorption compared to the perfectly ordered simulated DMOFs. Adsorption energies were obtained from the simulations by averaging the computed energy differences between the empty and the SO2-loaded cell for each step of the production runs. The resulting graph is shown in Figure S49. Comparison with the experimental isosteric enthalpy of adsorption (Figure 4, main
text) reveals a different curvature at low uptake and somewhat underestimated values for the mono- and dimethyl-substituted DMOF-X. These deviations may stem from the rigidity of the simulation cell. The increase in adsorption energy with methyl substitution is, however, well reflected.

S10. FT-IR spectra of SO2-adsorbed DMOF and DMOF-TM

Fourier transform infrared (FT-IR) spectra of SO2-adsorbed DMOF or -TM samples were recorded on a Bruker FT-IR Tensor 37 in attenuated total reflection (ATR) mode in the range of 600-4000 cm-1. First, the sample (~15 mg) was degassed at 393 K for 16 hours to completely remove any guest molecules. The degassed sample was dosed with SO2 up to 0.46 bar (350 Torr) and kept at this pressure for 15 minutes. The sample was then backfilled with helium to 1 bar. The tube with the SO2-adsorbed sample was immediately cooled to 77 K by placing it into liquid nitrogen for a short time of ~5 min. For the FT-IR measurement, the sample was removed from the tube and placed on the diamond ATR crystal. For comparison, the degassed sample without SO2 was also treated under the same conditions (that is, the sample tube was placed into liquid nitrogen before the FT-IR measurement).

To unravel the possible interactions of SO2 molecules with the DMOF-TM framework upon SO2 adsorption, a series of FT-IR spectra of an SO2-loaded DMOF-TM sample under exposure to an ambient air atmosphere (1-20 min) was recorded and compared with the pristine DMOF-TM spectra. Due to the high affinity of DMOF-TM for SO2 and the
relatively slow kinetics of SO2 release, the typical adsorption peaks of SO2 could be clearly detected, as shown in Figure S52. The peaks at 1327 and 1140 cm-1 are the asymmetric and symmetric S=O stretching modes of the SO2 molecules. [24] These peaks exhibit a large red-shift (∆ = -35 and -11 cm-1) compared to the free SO2 molecule (1362 and 1151 cm-1 [27]). This indicates interactions between the SO2 molecules and the DMOF-TM framework. Upon increasing the air exposure time of the SO2-loaded DMOF-TM sample, the intensity of the SO2 peaks gradually decreases and vanishes after 20 min. Compared to the FT-IR spectra of pristine DMOF-TM, several vibrational peak changes occur upon SO2 adsorption, as shown in Figure S53. The peaks at 3000 and 2943 cm-1 in the spectra of pristine DMOF-TM are due to the asymmetric stretching vibrations of the -CH3 and -CH2- groups of BDC-TM and DABCO. After SO2 adsorption, a decrease in the intensity of these peaks is observed, together with a slight shift (∆ = 5 and 4 cm-1). Within an instrument error of 2 cm-1, we caution against overinterpreting such values, even though a similar -CH2- vibration change was seen in a previous in-situ IR study on SO2-loaded DMOF, [1] where it was attributed to interactions between the -CH2- groups of the DABCO linkers and SO2 molecules. DABCO-SO2 interactions were also seen in the DFT-calculated strong SO2 binding sites on optimized DMOF-TM (Figure 8). Also, the asymmetric and symmetric stretching vibrations of the carboxylate group, COO-, at 1593 and
1442 cm-1 in pristine DMOF-TM may be interpreted as slightly blue-shifted (∆ = +4 and +2 cm-1) to 1597 and 1444 cm-1 in SO2-loaded DMOF-TM. Within the instrument error of 2 cm-1, such values are nearly invariant. We refrain from assigning interactions between the carboxylate groups of BDC-TM and SO2 molecules, also because such -COO⁻···SO2 interactions do not show up in the DFT-calculated strong SO2 binding sites on optimized DMOF-TM (Figure 8). The phenyl C=C bending mode of BDC-TM, with a blue-shift of ∆ = +3 cm-1 from 1539 cm-1 in pristine DMOF-TM to 1542 cm-1 upon SO2 adsorption, is also essentially invariant within the experimental error of 2 cm-1 and is not assigned to any weak interactions between the benzene rings of DMOF-TM and the SO2 molecules.

Prerequisites for successful localization of gas molecules in the pores of a MOF

Describing the spatial distribution of small guest molecules ("localization") in the pores of a crystalline host by diffraction methods is, generally, a demanding task. It is crucial that: a) the guest molecules predominantly reside at preferred positions with high occupation factors and low thermal displacement factors (a reasonable possibility for narrow-pore MOFs at low temperatures); b) the crystallinity of the host is high; c) the diffraction data are of high quality (preferably high-resolution synchrotron radiation data or, for guests composed of light elements, neutron scattering data, for which such elements have larger scattering factors). Single crystal data are much more informative than PXRD data (3D vs. 1D
projection of the Ewald sphere) and are highly preferred. We attempted to increase the chances of a successful structural assessment by influencing the major factors in our control, namely maximizing the concentration of the guest molecules, performing low-temperature measurements and attempting to improve the crystallinity of the samples (an account of those activities is given below). Unfortunately, we had no access to synchrotron or neutron sources; however, the results obtained by conventional means suggest objective limits for the possibility of refining the data for the DMOF-(Me)n type materials. The limitations seem to be objective and are connected with the rotational freedom of the linkers. The discussion below focuses on the most interesting target for us, the DMOF-TM compound (n = 4).

Optimization of crystallinity and single crystal size

Single crystals of maximal quality were targeted. The standard synthesis yields contained crystals of sufficient size (elongated blocks, square cross-section with a side exceeding >10 μm for DMOF-TM), but of somewhat inferior quality. The optimizations were carried out along the following lines:

a) DEF, DMA and DMF/MeOH solvent mixtures instead of pure DMF as the crystallization medium [unsuccessful]

b) Lower Tmax and addition of a drop of HNO3 [modest improvement; see Figure S56 for the best results]

c) Lowering the concentration of the reactants [no significant improvement]

Degassing and SO2 exchange strategies

The direct degassing (T = 120 °C, 10-5 mbar) turned out to be detrimental for the larger single crystals, while it did not affect the very small ones, at least optically. The initial PXRD
studies did not show evident pattern deterioration, suggesting milder degassing/exchange methods. The mildest degassing method attempted was soaking of the crystals in MeOH for 2 days followed by supercritical drying (performed with a Leica EM CPD300 dryer, 50 cycles, T = 10-40 °C cycling, pre-exchange with acetone). No large crystals of suitable quality for single crystal XRD analysis were found. The obtained sample was used for a PXRD measurement in a capillary. An alternative approach, which was tried, was a two-step exchange of a sample: first MeOH at room temperature, then liquid SO2 at ~+4 °C in a pressure tube (the reaction of SO2 with methanol is negligible in the absence of a base, particularly at low temperature). This method has the inherent problem that the complete removal of MeOH is not guaranteed, but on the other hand it is perhaps the mildest possible exchange method, because it completely avoids the state of empty pores.

Diffraction studies (single crystal XRD attempts and powder XRD)

The crystals produced by the double exchange were optically nearly acceptable for single crystal XRD studies (Figure S56). The pressure tube with the crystals in liquid SO2, stored in a fridge at +4 °C, was cooled to ~-10 to -15 °C using a salt-ice bath. The crystals were transferred to a precooled (~-10 °C) low-viscosity immersion oil, and the crystal picking was performed in a cool N2 stream (of about -15 °C, according to a thermocouple reading near the exhaust nozzle, Figures S54 and S55). Carrying out the
transfer and crystal picking at low temperature aimed at minimizing the dissolution of SO2 in the oil. The crystals picked from the oil turned out to be poly-twins (Figure S56) upon cell measurements. Instead of a single crystal, a small blob of crystals in oil was mounted and measured as a polycrystalline sample using Gandolfi scans, which is equivalent to an ordinary PXRD measurement. Following the failure of the single crystal measurements, our focus shifted to PXRD-based structural analysis of the SO2-loaded samples. All samples were measured using a Rigaku XtaLAB Synergy-S single crystal X-ray diffractometer, which is capable of performing measurements on both single crystals and microcrystalline samples in capillaries, with fine temperature control. In total, three types of measurements were done (together with a degassed comparison sample):

a) Degassed sample in a thin-wall 'Mark tube' capillary filled with SO2 gas; measurement at 100 K (the degassing was performed in a sample tube of the gas adsorption analyzer, which was then quickly flame-sealed; effective pressure ~1 bar at room temperature).

b) Liquid-SO2-exchanged polycrystals (MeOH, then liquid SO2; see above) in cold oil; measurement at 100 K.

c) Degassed sample under liquid SO2 in a pressurized thick-wall capillary at -68 °C, just above the melting point of SO2 [the measurement failed due to the high background and, possibly, failed degassing].

In terms of line widths, the best quality measurement was the one taken in the thin capillary under gaseous SO2 (green diffractogram in Figure S57). This measurement was used for the Le Bail and
Rietveld fitting. The measurement taken in the thick-wall capillary is, expectedly, of the worst quality due to the high background generated by the glass containment (very broad peak centered at ~22°) and was not analyzed further. The influence of SO2 sorption is evident from the comparison of the patterns. The differences between the SO2-loaded and the control sample become consistently stronger for the measurement taken in cold oil (in the latter case the concentration of SO2 in the pores is assumed to be higher).

Analysis of the Le Bail and Rietveld fits of the PXRD data

The Le Bail and Rietveld fits were performed with the Jana2006 software. [19] The CCDC 1879219 structure of DMOF-TM [6] was used as the initial model (Figure S58). Only the cell parameters were refined, while the coordinates, thermal displacement factors and occupancy factors were fixed; the hydrogen atoms were not included, as their influence is minimal and the positions found by single crystal XRD are most often idealized. In all cases, the measurement collected in the thin capillary (green diffractogram in Figure S57) was used, as it had the highest quality. Relevant refinement parameters: "fixed background", pseudo-Voigt peak shape function (the GU, GW and LY parameters were refined, while the LX parameter was kept equal to zero to enhance the convergence); no sample shift was refined (capillary measurement; the automatic post-measurement adjustment by the instrument software ensured near-ideal centering). The 2θ > 50° range, containing only very weak peaks, was discarded. Due to the low resolution
of the data and the relatively low amount of information provided by the broad, often fully coalesced peaks, the Rietveld refinements were performed using a "step-by-step parameter fixing" approach. The profile parameters were fixed early during the refinement. The Uiso of Ni was set to 0.02 and that of the other atoms to 0.04, which is close to the average, disregarding some outliers. The coordinates of the atoms in the framework were kept constant. The coordinates of the atoms modelling the guest molecules were added one by one, in order of decreasing electron density, and these coordinates were not refined further. The occupancies of the guest molecules expectedly showed a strong correlation with the Uiso value, so only the occupancies were refined. Such a method of refinement could introduce strong correlations, but simultaneous refinement was not possible. Still, the F(obs)-F(calc) Fourier maps from the fitting of the fixed initial model give a straightforward and reliable general understanding of the guests' distribution, even if the (correlated) refinement of poor data cannot provide precise atom coordinates or occupancies. It is worth recalling here that the Le Bail fit, unlike the Rietveld fit, does not involve information about the atoms, but depends only on the cell dimensions and symmetry. The Le Bail fit demonstrates the adequacy of the cell/symmetry combination and provides the upper limit of the fit quality attainable by the 'full' Rietveld fit. Figure S58. Views of the known DMOF-TM structure, used as a starting model in the refinements (based on CCDC 1879219 [6]). The discrepancy is mainly
due to the incompletely corrected peak asymmetry of the first two, strongest peaks, (100) and (001). Notably, the 2θ = ~39-44° region contains a range of weak peaks, which were poorly fit (a possible impurity, or minor structural differences incompatible with the given cell/symmetry combination). Figure S60. Rietveld fit, DMOF-TM without SO2 (Rp = 0.081; a reasonably good fit for an unadjusted model). The discrepancy is mainly due to the incompletely corrected peak asymmetry of the first two, strongest peaks, (100) and (001); the Simpson asymmetry correction was fixed due to correlation with the profile parameters (as a result, the fit of some weaker reflections improved at the expense of the two strongest peaks). Table S4 (summary of cell parameters and fit quality): single crystal XRD a): a = 10.8500(7) Å, c = 9.2239(6) Å, R1 = 0.039 (after SQUEEZE b)); PXRD-Rietveld, activated, no SO2 (100 K): a = 10.8083(5) Å, c = 9.2426(6) Å, Rp = 0.081; PXRD-Rietveld, activated, with SO2 (100 K): a = 10.7638(6) Å, c = 9.1679(10) Å, Rp = 0.12. a) Ref. [6]. b) SQUEEZE: the procedure for removal of the solvent/guest contribution. The Rietveld and the Le Bail fits are shown in Figures S59-S63 (see the profile-fitting R-factors in the captions; a short summary of the cell parameters and R-factors is given in Table S4). The PXRD data for the activated sample (without SO2) could be fit well using the Le Bail method, and reasonably well with the Rietveld method (Rp = 8.1%). After the loading of SO2, the Le Bail fit of the respective data even improves slightly (which means that the symmetry is adequate and the cell dimensions could be
fit well). Thereby, the adsorption of SO2 did not harm the structure of the framework. An attempt at Rietveld fitting with the "fixed" initial model gave a very poor fit, Rp = 0.328, which is consistent with the guest molecules in the pores not being accounted for. The respective F(obs)-F(calc) Fourier difference maps are shown in Figure S64. Figure S64. Difference Fourier maps, F(obs)-F(calc), interpretable as electron density maps, for the DMOF-TM structure (hydrogens were not included in the model). A symmetrically independent quarter of the projection along the z-axis is given (see Figure S58 for comparison). Each image represents a different z-slice (the respective "z-height" is given in the top-middle part of each image). The projection of the atoms along the z-axis is added as an eye-guide. The Fourier difference electron density maps (Figure S64) clearly point to the most probable locations of the guest molecules at x, y, z = 0, 0, 0-0.4 and 0.4, 0, 0.1. There is also a strong additional electron density roughly coinciding with the Ni-N bond (0.5, 0.5, 0.2-0.4), which should be attributed to imperfections of the model (it could be partially corrected by applying anisotropic displacement parameters, but the given data quality definitely does not allow additional refinement parameters). The final Rietveld refinement with the modelled guest molecules converged at Rp = 12.0%. While this value is beyond the conventional upper limit for a proper fit (Rp = 5-10%), it is still reasonably good, taking into account the poor data quality. However, one should be very cautious
regarding the significance of the fitting results. The distribution of the electron densities is credible, but the precision of the found atomic coordinates and occupation factors is definitely low. The data is of low resolution both in terms of the available 2θ interval and in terms of the broad profile peaks (with the latter not fit very well by simple models of peak asymmetry). The lack of independent profile information was evident during the fitting: the parameters had to be fixed step by step, otherwise the refinement became unstable. This nearly inevitably introduces correlations between parameters. Yet, the crude approximation of the localization of at least two molecules could be regarded as significant. The located electron densities, modelled as S and O atoms, are listed in Table S5. Higher electron densities were generally modelled as S atoms; O11 might be a mixed O/S site. While the occupancy is strongly correlated with the thermal displacement parameter, the former was refined and the latter was kept fixed. The strongest electron densities correspond to occupancies of approximately 0.2-0.5 (relative to the maximum allowed by the site symmetry) for the S and O atoms, which is physically reasonable. The found localizations of O11 (center of the largest cavity), S1 (largest cavity) and S2 (close to the Me groups) are not unexpected. O13, corresponding to a weaker density, is disposed too close to the Me groups (which could be partially explained by the H atoms not being accounted for in the refinement). The d(S1-O11) = 2.48 Å distance is longer, while d(S1-O13) = 1.43 Å approximately equals the S-O bond length
in SO2. Two O12 atoms contribute to a formal molecular arrangement with a distance of 1.60 Å between them. It is worth noting that the modelled guests do not straightforwardly represent SO2 molecules with correct molecular geometries, but rather represent the most probable localizations. It could be concluded that at 100 K the SO2 molecules tend to localize within the largest void, mainly ordering along the z-axis, i.e. in the (0, 0, 0-0.3) xyz-range, and in the vicinity of two methyl groups of the same ligand molecule at (~0, 0.38, 0.15). The cell images with the modelled guests are given in Figure S65. Conclusions on the structural assessment In the absence of access to a synchrotron X-ray or neutron source, an attempt was made to use a state-of-the-art X-ray diffractometer to localize the SO2 in the pores of DMOF-TM. The almost successful attempt to obtain single crystals loaded with SO2 for single crystal XRD was followed by powder XRD studies. Three different SO2-loaded samples were prepared. The highest-quality PXRD results were obtained for the simplest case, namely a degassed sample under an SO2 atmosphere in a capillary (the highest concentration of SO2 was assumed for the sample exchanged with liquid SO2 and measured in cold oil, yet the quality of that measurement was inferior). The pattern differences upon SO2 loading manifested themselves most clearly in the change of intensities of the (220) and (110) peaks. The Le Bail fitting of the PXRD data shows that the structure remains nearly intact upon the sorption of SO2 in
terms of the cell size and symmetry. The Rietveld refinement (Rp = 12%, compared to Rp = 8.1% for the fitting of the data without SO2) using the "fixed" initial model and the step-by-step localization of the electron densities gave the probable localization of the guest molecules. The composition of the structure could be crudely described by a [Ni2(DABCO)(Me4BDC)2]·~5SO2 formula, based on the refinement of the occupancy factors. According to the structural analysis, the guest molecules predominantly localize in the largest cavity along the z-axis in the range of x, y, z = 0, 0, 0-0.3 and in the vicinity of two methyl groups of the same Me4BDC ligand molecule at approximately x, y, z = 0, 0.38, 0.15. It is necessary to stress the very approximate character of the refined numeric parameters, like coordinates and occupancies (even though the Fourier difference maps clearly indicate the presence of the guests and their approximate localizations). The broad peak profiles of the experimental patterns, which are partly an instrumental problem and partly an inherent property of the material, did not allow a fit with a high number of parameters. It was necessary to refine the modelled guest-atom coordinates step by step and fix them to preserve convergence. It was, for example, also impossible to reach convergence when refining both the occupancy and the thermal displacement (Uiso) parameters of the modelled guest molecules (the Uiso value was kept constant and the occupancy was refined). It is worth stressing a few problems associated with locating potentially disordered light guests in a structure
like DMOF-TM by diffraction methods. - Disorder in framework structures limits the utility of diffraction methods for the localization of guest molecules. The DMOF-TM compound contains large disordered fragments (2-positional disorder for the Me4BDC ligands and 8-positional disorder for the DABCO ligands, according to the known crystal structure of the material without SO2). The disorder inherently decreases the crystallinity and, hence, the potential quality of the refinement. Moreover, the number of disordered components is not limited by evident energetic considerations; hence, new disordered components might appear as a result of interaction with the guest molecules. We did not observe clear signs of additional disorder of the Me4BDC moiety, but the crystallinity of the material was not very good either (that is typical for MOF materials in general, but disorder might be an additional negative factor). The crystallinity could be higher for MOFs with minimal ligand disorder, as in NOTT-300. [26] However, even in NOTT-300 the SO2 localization could only be achieved using an advanced method, namely inelastic neutron scattering. - The expected rotational freedom of the Me4BDC ligand in DMOF-TM at room temperature might be a factor that enhances the high affinity to SO2 compared to rigid linkers under certain conditions. The adjustment of the relative positions of the methyl groups could optimize a double weak contact with "bridging" SO2 molecules. In other words, the same factor that might hinder the diffraction studies might also enhance the SO2 adsorption in a certain mid-temperature range (while we did not find evidence of additional disordered components using diffraction methods at 100 K, the situation should
change at higher temperatures). - Low temperatures are preferable for a meaningful application of diffraction methods, and the properties at room temperature or above might be different. Lower temperatures mean not only smaller thermal displacement parameters and less disorder, but also higher chances of guest ordering. The restriction of the guest distribution by the cell symmetry is often a necessary simplification, which might be particularly far from reality at elevated temperatures. Although diffraction methods might not be particularly efficient for DMOF-TM, the 1D rotational freedom of the ligands is a factor that is interesting in the context of adsorptive properties. Diffraction methods, even of top-notch instrumental quality, might give somewhat limited insight into the localization of the guest molecules in the case of DMOF-TM due to the inherent disorder of the ligands, which, at the same time, makes this material interesting. In such a case, diffraction methods should be combined with other analytical methods and computational assessment, which was attempted in this paper.
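The Le Bail check of cell adequacy summarized in Table S4 can be illustrated numerically: for a tetragonal cell, 1/d² = (h² + k²)/a² + l²/c², and Bragg's law converts each d-spacing into a 2θ position. The sketch below is an illustration only, not the Jana2006 procedure; Cu Kα radiation is assumed, since the source does not state the wavelength.

```javascript
// Expected peak positions for a tetragonal cell (illustrative check).
// Cu K-alpha radiation is an assumption; the source does not state the wavelength.
const LAMBDA = 1.5406; // Angstrom

// Tetragonal metric: 1/d^2 = (h^2 + k^2)/a^2 + l^2/c^2
function dSpacing(h, k, l, a, c) {
  return 1 / Math.sqrt((h * h + k * k) / (a * a) + (l * l) / (c * c));
}

// Bragg's law: lambda = 2 d sin(theta)  =>  2theta in degrees
function twoTheta(h, k, l, a, c, lambda = LAMBDA) {
  const d = dSpacing(h, k, l, a, c);
  return (2 * Math.asin(lambda / (2 * d)) * 180) / Math.PI;
}

// Refined cell of the SO2-loaded sample (Table S4): a = 10.7638 A, c = 9.1679 A
const a = 10.7638;
const c = 9.1679;
for (const [h, k, l] of [[1, 0, 0], [0, 0, 1], [1, 1, 0], [2, 2, 0]]) {
  const pos = twoTheta(h, k, l, a, c).toFixed(2);
  console.log(`(${h}${k}${l}): d = ${dSpacing(h, k, l, a, c).toFixed(4)} A, 2theta = ${pos} deg`);
}
```

With these parameters the (100) and (001) reflections fall below 2θ = 10°, consistent with their description above as the first two, strongest peaks of the patterns.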
Announcement of the 2016 Polymers Young Investigator Award Dear readers of Polymers, Finally, after an extensive voting period, we are proud to present the first winner of the Polymers Young Investigator Award: Dr. Luis M. Campos, who is an assistant professor at the Chemistry Department of Columbia University, USA. He was selected by the evaluation committee of the Polymers Young Investigator Award from 38 candidates, each proposed by at least two colleagues in their field of expertise. Fifteen of the candidates are working in the United States, 13 in Europe and 10 at universities in Asian countries. The proposed candidates represented a diverse range of fields in polymer science. Dr. Campos received his Ph.D. with Professor Miguel A. Garcia-Garibay and Professor Kendall N. Houk at the University of California, Los Angeles (UCLA) in 2006 and did his postdoctoral training at the University of California, Santa Barbara (UCSB) under the supervision of Craig J. Hawker. At the age of 37, Dr. Campos has already achieved an extraordinary standing in the polymers community. His excellent work focuses on the design and application of polymeric materials, for example in solar cells and organic light-emitting diodes; all topics of high societal and economic impact. His research has been featured in highly ranked journals such as the Nature family, Angewandte Chemie, and the Journal of the American Chemical Society (JACS), to name a few. To date, he has co-authored over 60 articles and has
received numerous awards, including the American Chemical Society (ACS) Arthur C. Cope Scholar Award, the Office of Naval Research (ONR) Young Investigator Award, the National Science Foundation (NSF) CAREER Award, the 3M Non-Tenured Faculty Award, the Cottrell Scholar Award, the Inter-American Photochemical Society (I-APS) Young Faculty Award, and the Journal of Physical Organic Chemistry Award for Early Excellence. This has led to invitations to numerous prominent lectures all over the world. Moreover, his work constitutes not only high-level basic research, but has proven to be highly relevant to industry, which is reflected in 10 patents filed by him and his coworkers. In addition to the cash prize and plaque, Dr. Campos will be an invited speaker at the 2018 Polymers conference. On behalf of the Polymers Editorial Office staff and Editorial Board members, I wish to congratulate Dr. Campos on his excellent performance and wish him all the best for his future career.
Advanced Cryptography Using Color Blocks: In the world of emerging technology, cryptography is used in authentication/digital signatures, time stamping, electronic money, secure network communications (Secure Sockets Layer (SSL), Kerberos), anonymous remailers, disk encryption, etc. Over the past years, cryptology has evolved from a secret art into a modern science. Weaker algorithms and algorithms with short keys are disappearing, political controls on cryptography have been reduced, and secure cryptography is becoming more and more a commodity. Moreover, implementations are becoming more secure as well. Since information processing by electronic devices leads to a multitude of security-relevant challenges, we need to keep developing new cryptographic methods and algorithms day by day. This paper is about a new cryptographic development and how to use it to achieve more secure communication over networks. 2) Known Plaintext: In this type, the cryptanalyst has plaintexts and their corresponding ciphertexts, and tries to find the relation between the two. 3) Chosen Ciphertext: The attacker obtains the plaintexts corresponding to an arbitrary set of ciphertexts. 4) Chosen Plaintext: The attacker obtains the ciphertexts corresponding to an arbitrary set of plaintexts. 5) Adaptive Chosen Plaintext: This is similar to the chosen-plaintext attack, except that the attacker chooses each subsequent set of plaintexts based on the information obtained from previous encryptions. 6) Adaptive Chosen Ciphertext: This is similar to the chosen-ciphertext attack, except that the attacker chooses each subsequent set of ciphertexts based on the information obtained from
previous encryption methods. 7) Related-Key Attack: Like the chosen-plaintext attack, this is an attack in which the attacker can obtain ciphertexts encrypted under two keys. These keys are unknown, but the relationship between them is known; for example, the two keys may differ by a single bit. There are several issues related to cryptographic algorithms, such as space complexity, time complexity and resistance to various types of attacks. In order to implement an effective and robust cryptographic algorithm, all these aspects need to be considered. Let us discuss these issues: a) Time Complexity: This is the amount of time required to encrypt and decrypt the data. The algorithm should be designed in such a way that it takes as little time as possible to execute. Time complexity plays an important role in modern cryptography, as more and more systems work in real-time environments nowadays. Hence, while implementing a cryptographic algorithm it is necessary to consider its time complexity. b) Space Complexity: This is the amount of space consumed by the ciphertext compared with the plaintext. As more and more mobile devices with limited connectivity in terms of data rate are being used nowadays, it is essential to keep the size of the ciphertext being produced as small as possible, so as to deal with variable data rates. Thus, it is very important to devise a way to reduce the size of the ciphertext as much as possible to increase data transmission efficiency. c) Security: The very purpose
of cryptography is to secure the data being transmitted over the network against various types of attacks. The data being transmitted is always vulnerable to attacks such as man-in-the-middle attacks, brute-force attacks, etc. Thus, in order to prevent the data from being compromised, it is necessary to protect it from unauthorized users. The feasibility of a cryptographic scheme must be tested against such attacks so as to secure the data being sent. Hence, providing security is one of the major issues of cryptography. III. ENCRYPTION IN YOUR DAILY LIFE A. SSL Certificates Browsing the internet is an activity that most of us do every day. On the internet, encryption comes in the form of Secure Sockets Layer (SSL) certificates. SSL protection is a security technology feature that website owners can buy in order to increase the security of their site. You can recognize an encryption-protected website by the green padlock and the "HTTPS" in the URL. SSL protection establishes an encrypted communication channel between a browser and a web server. An active SSL certificate on a web server is especially useful on websites where visitors enter sensitive information such as credit card information, phone numbers, IDs, etc. That means that all the data being transferred between a browser and a web server is encrypted for security and privacy reasons. B. Cash Withdrawal from ATMs Banks use Hardware Security Module (HSM) encryption methods in order to protect your PIN and other banking information while the transaction is in
transit in the network. HSM encryption comes in many different types but, in essence, its function is to encrypt the 4-to-6-digit PIN of every person that uses the ATM. Then, the PIN is decrypted on the HSM side in order to execute and validate the transaction or money withdrawal. This encryption method ensures that hackers won't be able to get their hands on your PIN in case they intercept the network data in transit. C. Email Webmail applications such as Gmail and Hotmail provide the earlier-explained SSL encryption (HTTPS) in order to protect the user. However, it's important to note that SSL encryption does not encrypt the text in emails. Thus, without going too deep into the technical details, the NSA, for example, would still be able to intercept your emails in readable text format. Privacy-minded users are increasingly leaning towards end-to-end encrypted email providers such as Proton mail and Counter Mail. Millions of users have already made the switch to similar encryption-protected email providers. This email software ensures that every sent and received email is encrypted into ciphertext. So, even when an email is intercepted, it is unreadable to anyone without the decryption key. D. File Storage Popular file storage platforms such as Dropbox and Google Drive, with 500 million and 800 million users respectively, place great emphasis on the security of the platform. Obviously, these platforms wouldn't be used by millions of users -individuals and businesses -if they didn't provide a secure environment to store important files, photos and
videos. That means that every file is encrypted into cipher data in order to protect the users. Dropbox even stated in their security protocol that they break every piece of data into multiple smaller pieces and encrypt these pieces one by one. Both platforms protect files in transit between servers and apps, but also at rest (when stored on their servers), which is incredibly helpful for all these millions of users, who can be sure all their important data is safely stored online. E. Messenger Apps (WhatsApp) According to TechCrunch, the popular messenger application WhatsApp had 1.5 billion active monthly users in Q4 2017, which amounts to 60 billion messages sent per day. It comes as no surprise that WhatsApp values the privacy of its users, which is why WhatsApp implemented complete end-to-end encryption in their messenger application. That means that all your messages, photos, videos, voice messages and files are secured. Only the person you're communicating with is able to read what you're sending. End-to-end encryption also means that even WhatsApp is not able to read any messages, because they are stored on its servers in encrypted format. And the best thing is that WhatsApp automatically encrypts every message by default and there's no way to turn off the encryption. [2] IV. METHODOLOGY Using cryptographic algorithms applied to a set of inputs and then converting the obtained ciphertext into colour blocks, a system is developed which needs a plain text and a PIN as input. At this moment no validation is put
on any of the inputs. The PIN is used to reverse the colour-block output back to plain text at the time of decryption. Basic scripting and programming languages are also used in this research work, and a website was developed as a demonstration. A. Algorithms Used 1) Transposition with the Rail Fence Cipher: The rail fence cipher (also called a zigzag cipher) is a form of transposition cipher. It derives its name from the way in which it is encoded. In the rail fence cipher, the plain text is written downwards and diagonally on successive "rails" of an imaginary fence, then moving up when the bottom rail is reached. When the top rail is reached, the message is written downwards again until the whole plaintext is written out. The message is then read off in rows. 2) Substitution: In this process, characters are converted to their ASCII codes and the sequence is then reversed. 3) AES Encryption/Decryption: The most popular and widely adopted symmetric encryption algorithm likely to be encountered nowadays is the Advanced Encryption Standard (AES). It is at least six times faster than Triple DES. A replacement for DES was needed, as its key size was too small; with increasing computing power, it was considered vulnerable to exhaustive key-search attacks. Triple DES was designed to overcome this drawback, but it was found to be slow. The features of AES are as follows: a) Symmetric-key symmetric block cipher b) 128-bit data, 128/192/256-bit keys c) Stronger and faster than Triple DES d) Provides
full specification and design details e) Software implementable in C and Java 4) Operation of AES: AES is based on a 'substitution-permutation network'. It comprises a series of linked operations, some of which involve replacing inputs by specific outputs (substitutions), while others involve shuffling bits around (permutations). Interestingly, AES performs all its computations on bytes rather than bits. Hence, AES treats the 128 bits of a plaintext block as 16 bytes. These 16 bytes are arranged in four columns and four rows and processed as a matrix. The number of rounds in AES is variable and depends on the length of the key: AES uses 10 rounds for 128-bit keys, 12 rounds for 192-bit keys and 14 rounds for 256-bit keys. Each of these rounds uses a different 128-bit round key, which is calculated from the original AES key. 5) Colour Code Conversion: The final output of the previous stages is converted into colour blocks. B. Programming Languages Used 1) HTML: Hypertext Markup Language is the standard markup language for creating web pages and web applications. With Cascading Style Sheets and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web. 2) CSS: Cascading Style Sheets is a style-sheet language used for describing the presentation of a document written in a markup language like HTML. CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript. 3) JavaScript: JavaScript, often abbreviated as JS, is a high-level, interpreted programming language that conforms to the ECMAScript specification. JavaScript has curly-bracket syntax, dynamic
typing, prototype-based object-orientation, and first-class functions. C. Flowchart for Encryption and Decryption D. Output In the above screenshots we can see that the user tries to encrypt the plaintext into a cipher colour block using a PIN code as the key. The input goes through the different encryption stages one by one and is then converted into a colour-block output. The user gets the above output, which is the colour block generated from a set of inputs consisting of text and numerals. The operations are performed on the input "dimpal ramanuj" with the PIN "8238" to generate the coloured output. E. Decryption Case 1: Wrong PIN. If the PIN entered does not match the encryption PIN, a popup box is displayed stating that the PIN does not match and asking the user to re-enter the correct PIN. In the above example, the user wants to decrypt the colour code. According to the principle of encryption and decryption, we need a key; here the PIN is the key. What if the user unknowingly enters the wrong PIN? A popup is displayed saying that the correct PIN must be entered. Case 2: Correct PIN. In the above screenshot we see that the user gets back the plain text from the cipher colour block if the PIN entered at the time of encryption is the same as that entered at the time of decryption. The above operation is performed as a combination of different encryption techniques, where the output of one is the input of the next, and decryption proceeds in reverse. The PIN is used at the third stage, when AES comes into the picture. But during decryption
it is needed at the initial stage, as the combination is also reversed. V. RESULTS In the above environment, we wish to show an encryption technique which combines several encryption/decryption techniques to produce an output. The input is given to the above environment and the output is a colour block. From the above observations, we can see that the user is able to obtain a colour-block cipher as output, generated from a set of inputs in the form of text or numerals. Also, there is a PIN code associated with the whole process, which is used as a key in our scenario. The user enters the plaintext and PIN in the page we created using basic scripting and programming languages, and the desired coloured cipher blocks are produced as output. VI. CONCLUSION To meet the need for internet security, we have to provide a complex cryptographic algorithm, as proposed in this paper. In this work we have used a combination of different cryptographic (encryption) techniques in one algorithm and developed an environment around it. Also, we have transformed the output into coloured cipher blocks instead of text and numerals.
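To make the pipeline concrete, the following JavaScript sketch chains the rail fence transposition, the reversal-based substitution and the colour-block conversion described above. It is illustrative only: the function names are ours, the AES stage (and thus the PIN) is omitted, and zero-padding the tail to a multiple of three bytes is an assumption not taken from the paper.

```javascript
// Stage 1: rail fence (zigzag) transposition, read off row by row.
function railFenceEncrypt(text, rails) {
  const rows = Array.from({ length: rails }, () => "");
  let row = 0;
  let dir = 1;
  for (const ch of text) {
    rows[row] += ch;
    if (row === 0) dir = 1;           // bounce off the top rail
    else if (row === rails - 1) dir = -1; // bounce off the bottom rail
    row += dir;
  }
  return rows.join("");
}

// Stage 2: substitution, modelled here as reversing the character sequence.
function reverseText(text) {
  return [...text].reverse().join("");
}

// Stage 3: colour-block conversion -- pack each run of three character codes
// into one #RRGGBB value (zero-padding the tail is our assumption).
function toColourBlocks(text) {
  const bytes = [...text].map((ch) => ch.charCodeAt(0) & 0xff);
  while (bytes.length % 3 !== 0) bytes.push(0);
  const blocks = [];
  for (let i = 0; i < bytes.length; i += 3) {
    const hex = bytes
      .slice(i, i + 3)
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    blocks.push("#" + hex);
  }
  return blocks;
}

// Example run on the paper's sample input (PIN/AES stage omitted here).
const cipher = reverseText(railFenceEncrypt("dimpal ramanuj", 3));
console.log(toColourBlocks(cipher));
```

Decryption runs the chain in reverse: colour blocks back to character codes, undo the reversal, then invert the rail fence transposition; the PIN would gate the omitted AES stage.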
STUDY OF DENGUE FEVER IN SOUTH EASTERN RAJASTHAN: Dengue Fever has become a significant resurgent disease in the past 20 years all over India. Our study outlines the clinical spectrum and prognosis of the disease beyond rural and urban confines. The study was conducted on 350 patients admitted to SRG Hospital from April 2013 to October 2013, a period of about 7 months. All patients with febrile illness positive for NS1 antigen, IgM or IgG/IgM antibodies against Dengue virus were taken as cases. The patients were subjected to clinical examination and baseline investigations. The study was conducted to determine the prevalence of dengue infection, based on laboratory rapid screening tests for NS1, IgM and IgM/IgG antibodies, and to study the seasonal variation and the clinical profile in these cases. Dengue causes increased morbidity and mortality and requires prompt diagnosis and treatment; for the proper management of these cases, the rapid screening tests for NS1, IgM and IgM/IgG antibodies, together with platelet counts, help clinicians toward achieving this goal. The total number of patients was 350, of whom 180 were male and 170 were female. Apart from fever, the common presentations were icterus, body ache, rash, headache, gastrointestinal symptoms, haemorrhage and shock. Investigations revealed thrombocytopenia: <10,000/cumm in 35 patients, <25,000/cumm in 120 and <75,000/cumm in 155, while normal platelet counts (75,000-150,000/cumm) were found in 40 patients. Leucopenia (<3,000/cumm) was detected in 168 cases, and haemoglobin values were raised. The case fatality rate in this study was 1.7%. Age group analysis revealed that it mainly affects younger
age persons (21-30 years). Febrile patients had rash, myalgia, and abdominal pain. INTRODUCTION: Dengue fever has been identified as an emerging infectious disease in Rajasthan state, and sporadic dengue fever cases have been reported in Jhalawar district at S.R.G. Hospital & Medical College, Jhalawar. Dengue viruses are mosquito-borne flaviviruses that have plagued people for centuries. [1] The immunology of dengue fever is characterised by an initial viraemic phase corresponding to the first 3 days of illness, followed by a critical immune phase spanning the 3rd to 6th day of illness. The phase beyond the 6th day of illness is called the recovery phase, although a sizable number of patients take longer to recover. Elucidation of the exact clinical profile is important for patient management and thus crucial for saving lives. The present study attempts to describe the salient clinical and laboratory findings of serologically confirmed hospitalised cases of dengue fever during the period April 2013 to October 2013. Patients were treated symptomatically; platelet transfusion was given in 85 patients. MATERIAL AND METHODS: The present study was conducted on patients admitted to the medicine department of SRG Hospital, Jhalawar, from April to October 2013. A total of 350 cases of dengue fever were analysed during this period. Data were collected from dengue fever cases admitted through the emergency or outpatient department, in a detailed proforma based on the history given by the patient or attendant, with particular emphasis on age, sex, and laboratory findings. All data were analysed, documented and interpreted as per the
laid down protocol. All patients with acute febrile illness underwent serology for NS1, IgM, and IgG/IgM using a rapid kit test. As per WHO criteria, dengue haemorrhagic fever is defined as an acute febrile illness with minor or major bleeding, thrombocytopenia, and evidence of plasma leakage; these patients improved with fluids. Dengue shock syndrome is defined as dengue haemorrhagic fever with signs of circulatory failure, including narrow pulse pressure, hypotension, and frank shock. [2] Dengue-positive cases by NS1, IgM, or IgG/IgM (secondary infection) were included. These patients were admitted with dengue fever or dengue haemorrhagic fever, presenting with myalgia, headache, rash, hypotension or bleeding manifestations, and shock syndrome. The diagnosis of dengue fever, dengue haemorrhagic fever, and dengue shock syndrome was based on clinical grounds. [3] The patients were subjected to thorough clinical examination and laboratory investigations, including complete hemogram, urea, creatinine, liver function tests, chest X-ray, ECG, and ultrasound of the abdomen. All patients admitted with dengue had a symptom duration of 3-4 days, and the platelet count was done at the time of admission. The diagnosis of pleural effusion was confirmed by chest X-ray. On ECG, tachycardia was present in all patients with shock and haemorrhage. On liver function testing, bilirubin was above 2 mg% in 18 patients; increased bilirubin levels may present clinically as jaundice, which is rarely present in dengue fever patients. No case of fulminant hepatic failure was noted in our study. RESULT: Patients were analysed by tabulation of the data; numbers and percentages were enumerated for all categorical variables such as clinical characteristics and biochemical tests.
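The tabulation step is simple arithmetic. As a check on the reported percentages, a minimal sketch (function name ours) reproduces them from the raw counts given in the paper (180 male and 170 female of 350 patients; 6 deaths):

```python
def tabulate(counts: dict, total: int) -> dict:
    # Convert raw patient counts into percentages of the study population.
    return {k: round(100 * n / total, 2) for k, n in counts.items()}

sex = tabulate({"male": 180, "female": 170}, total=350)
# -> {"male": 51.43, "female": 48.57}

# Case fatality rate: 6 deaths among 350 confirmed cases.
cfr = round(100 * 6 / 350, 1)  # -> 1.7
```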
All adult patients with dengue fever admitted to the medicine ward during the 7-month period from April to October 2013 with a confirmed diagnosis were selected for this study; NS1-, IgM-, and IgG/IgM-positive cases were included. OBSERVATIONS: The total number of patients was 350, of whom 51.43% were male and 48.57% female; the difference in prevalence was not statistically significant. The age of patients varied from 21 to 60 years. The maximum number of patients belonged to the 21-30 years group (36.57%), followed by 31-40 years (30.50%). Cases positive for dengue NS1 antigen, IgM, and IgG/IgM numbered 120 (34.28%), 130 (37.14%), and 100 (28.57%), respectively. Serologically, IgG and IgM were present in 49 cases of dengue haemorrhagic fever and 26 were NS1 positive; only 2 NS1-positive cases were also positive for IgG antibody. In dengue shock syndrome, IgM and IgG antibodies were present in all 6 cases. Clinical Features: Fever was documented in 280 patients, headache in 160, myalgia in 204, and abdominal pain in 105; some patients had a single isolated symptom, while others had 2-3 symptoms together. Rash was present in 110 patients and epistaxis in 27. Bleeding manifestations included per vaginal bleeding in 9 females and gastrointestinal bleeding in 11 patients, and disseminated intravascular coagulation was documented in one case. Ascites was detected in 15 patients and pleural effusion in 7. DISCUSSION: Dengue is emerging as a serious public health problem globally. This may be due to climatic changes or to failure to control the mosquito
vector. [4] Classical dengue fever was first reported from Egypt in 1779. [5] Dengue haemorrhagic fever was first reported in India, from Kolkata, in 1963-64, when 200 people died. [6] This study describes the clinical profile, laboratory investigations, and outcome of dengue fever (77.1%), dengue haemorrhagic fever (21.4%), and dengue shock syndrome (1.7%). The incidence of dengue haemorrhagic fever is higher than the 13.5% observed by Sharma et al. [7] In West Bengal, nearly 61% of dengue cases reported between 2005 and 2007 were secondary infections; our study revealed 28.5% secondary infections. In this study the gender distribution was nearly equal, with 51.43% male and 48.57% female. This contrasts with the Madeira (Europe) 2012 study, [8] in which 41.1% were male and 58.9% female. In our series, the predominant presentation was fever; bleeding of various degrees and gastrointestinal symptoms were not associated with thrombocytopenia, similar to the study in Kerala [6] and Sharma et al. Hypotension, recorded in 14 patients, responded to IV fluids, as in the study of Nandani Chatterjee. Laboratory investigation in our series showed thrombocytopenia, leucopenia, and an increase in haemoglobin level. Therapy in most cases involved antipyretics and fluids, and platelet transfusion was given to patients with platelet counts below 25,000/cumm. According to WHO, mortality in untreated cases is 20%; in our study, the case fatality rate was 1.7% (6 patients died). One patient had severe gastrointestinal bleeding, and 5 patients died of dengue shock syndrome, of whom one developed disseminated intravascular coagulation. All the patients
died of multi-organ failure. To conclude, the incidence of dengue fever predominantly affected the younger age group in both genders, mostly as a febrile illness with myalgia, headache, abdominal pain, rash, mild bleeding, and gastrointestinal symptoms, and mostly responding to conservative therapy. Proper confirmation of diagnosis, early institution of therapy, public awareness, and vector control are important factors to be taken into consideration in order to form policies on dengue prevention and management.
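The WHO case definitions quoted in the Methods can be written as a small decision rule. The sketch below is a simplified illustration of those criteria as summarised in the text; the platelet threshold of 100,000/cumm and all names are our assumptions, not part of the paper, and real classification also requires clinical judgement:

```python
def classify_dengue(fever: bool, bleeding: bool, platelets: int,
                    plasma_leakage: bool, circulatory_failure: bool) -> str:
    # Simplified sketch of the WHO criteria as summarised in the text:
    # DHF = acute febrile illness + bleeding + thrombocytopenia + plasma leakage;
    # DSS = DHF + signs of circulatory failure (narrow pulse pressure,
    # hypotension, frank shock). The 100,000/cumm cut-off is illustrative.
    dhf = fever and bleeding and platelets < 100_000 and plasma_leakage
    if dhf and circulatory_failure:
        return "dengue shock syndrome"
    if dhf:
        return "dengue haemorrhagic fever"
    return "dengue fever" if fever else "not dengue"
```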
Silibinin Promotes Cell Proliferation Through Facilitating G1/S Transitions by Activating Drp1-Mediated Mitochondrial Fission in Cells. The heart, liver, and kidney, known as the essential organs for metabolism, possess a unique ability to regulate proliferation in the body's response to injury. Silibinin (SB), a natural polyphenolic flavonoid extracted from the traditional herb Silybum marianum L., has been used to protect hepatocytes. Whether SB can regulate mitochondrial fission in normal cells, and the underlying mechanisms, remain unclear. Here, we show that SB markedly promoted cell proliferation by facilitating the G1/S transition via activating dynamin-related protein 1 (Drp1), which in turn mediated mitochondrial fission in these normal cells. SB dose-dependently increased the mitochondrial mass, mtDNA copy number, cellular adenosine triphosphate production, mitochondrial membrane potential, and reactive oxygen species in normal cells. Furthermore, SB dose-dependently increased the expression of Drp1, and blocking Drp1 abolished SB-induced mitochondrial fission. In conclusion, we demonstrate that SB promotes cell proliferation through facilitating the G1/S transition by activating Drp1-mediated mitochondrial fission. This study suggests that SB is a potentially useful herbal derivative for the daily prevention of various diseases caused by impaired mitochondrial fission. Mitochondria are essential eukaryotic organelles that provide energy for most processes, including metabolism, cell cycle progression, differentiation, immune responses, and apoptotic cell death 6,7. Under physiological conditions, the mitochondrial network is highly dynamic, modulating bioenergetics such as reactive oxygen species (ROS) generation, cell proliferation, and death 8,9.
Dysfunction in mitochondrial dynamics results in impaired adenosine triphosphate (ATP) synthesis, decreased mitochondrial membrane potential (MMP), mitochondrial DNA (mtDNA) mutation, and excessive
ROS production 10, which causes various diseases, including cardiovascular diseases 11, kidney diseases 12, metabolic diseases 13, and cancer 14. Mitochondrial fission is essential for maintaining the mitochondrial network. Dynamin-related protein 1 (Drp1), a large dynamin-related cytosolic GTPase, is recruited to the mitochondrial outer membrane, where it forms active GTP-dependent mitochondrial fission sites during fission 15. It has been reported that dysfunctional Drp1 can disrupt mitochondrial homeostasis and lead to cell death 16. Restoration of Drp1-mediated mitochondrial fission might be a mechanism underlying SB's protection against cardiac, hepatic, or nephritic diseases, but this hypothesis has not been fully validated. In this study, we used cardiomyocyte, hepatocyte, and renal tubular epithelial cell models to demonstrate that SB can improve mitochondrial morphology and function by restoring Drp1-mediated mitochondrial fission. Cell Viability and Cell Growth Assay: The effects of SB (Chengdu Must Bio-Technology Co., Ltd., Chengdu, China; purity 98.89% by HPLC) on cell viability were determined using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT). LO2 (3 × 10³ cells/well), AC16 (3 × 10³ cells/well), and HK2 (5 × 10³ cells/well) cells were seeded onto 96-well microplates, cultured for 24 h, and then treated with SB at the indicated concentrations for the indicated periods (24, 48, and 72 h). Cellular viability was assessed by MTT assay and expressed as a ratio to the absorbance value at 570 nm of control cells, read on a microplate reader (Multiskan FC, Thermo Fisher Scientific, Inc., Waltham, MA,
USA). Colony Formation Assay: LO2 (500 cells/well), AC16 (500 cells/well), and HK2 (500 cells/well) cells were seeded onto six-well plates and treated with SB (0, 12.5, 25, and 50 mM/l) for 24 h. Cells were then washed with phosphate-buffered saline (PBS) and cultured in fresh medium for 15 days. After incubation, cells were fixed in 75% alcohol at 4 °C overnight and stained with crystal violet dye for 30 min. Graphical abstract: SB promotes the G1/S transition in the cell cycle through the Drp1-mediated mitochondrial fission pathway in vitro. SB increased the expression of Drp1 in human AC16 cardiomyocytes, LO2 hepatocytes, and human proximal tubular epithelial HK2 cells, leading to excessive mitochondrial fission and cell proliferation, thereby promoting the G1/S transition in the cell cycle and increasing the expression of CDK2 and cyclin E1. Blocking Drp1 inactivates mitochondrial fission, decreases the G1/S transition in the cell cycle, and decreases cell proliferation. Our study suggests that SB can be exploited as a potentially useful herbal derivative for the daily prevention of various diseases caused by impaired mitochondrial fission. Flow Cytometry of Cell Cycle: The cell cycle was measured with a Cell Cycle Detection Kit (KeyGen BioTECH, Nanjing, China). Cells were harvested after 24 h of SB (0, 12.5, 25, and 50 mM/l) treatment, washed with PBS twice, and fixed with 70% ethanol at 4 °C overnight. Cells were washed twice with PBS, incubated with RNase A for 30 min, and then stained with propidium iodide (PI) in the darkroom.
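After PI staining, the acquired DNA-content histogram is gated into cell-cycle phases. The sketch below is a toy illustration of that quantification (the thresholds and function name are our assumptions, not the kit's or instrument's procedure): cells near the 2N peak are counted as G1, near 4N as G2/M, and the remainder as S.

```python
def cell_cycle_fractions(dna_content, g1_peak=1.0, g2_peak=2.0, tol=0.15):
    # Classify each cell by PI fluorescence (proportional to DNA content):
    # within tol of the 2N peak -> G1; within tol of the 4N peak -> G2/M.
    g1 = sum(abs(x - g1_peak) <= tol * g1_peak for x in dna_content)
    g2m = sum(abs(x - g2_peak) <= tol * g2_peak for x in dna_content)
    # Everything else is counted as S here; real gating also excludes
    # sub-G1 debris and aggregates.
    s = len(dna_content) - g1 - g2m
    n = len(dna_content)
    return {"G1": g1 / n, "S": s / n, "G2/M": g2m / n}

# Synthetic example: 6 cells at 2N, 3 mid-replication, 1 at 4N.
fractions = cell_cycle_fractions([1.0] * 6 + [1.5] * 3 + [2.0])
# -> {"G1": 0.6, "S": 0.3, "G2/M": 0.1}
```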
The cell cycle was analyzed by flow cytometry (CytoFLEX, Beckman Coulter, Brea, CA, USA). Determination of Relative mtDNA Copy Number: Total DNA of SB-treated cells (0, 12.5, 25, and 50 mM/l) was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). RT-qPCR analysis was used to determine the relative mtDNA copy number. The qPCR amplification reaction was performed via SYBR Green chemistry on a LightCycler® 96 Real-Time PCR system (Roche, Basel, Switzerland). The mtDNA was synthesized and amplified according to the manufacturer's instructions, as described previously 17. Measurement of MMP: MMP was determined using an MMP assay kit with JC-1 (Beyotime Institute of Biotechnology, Haimen, China) according to the manufacturer's instructions. For each group (0, 12.5, 25, and 50 mM/l of SB treatment), JC-1 reagent was added and incubated for 20 min at 37 °C. Cells were washed twice with PBS and examined by fluorescence microscopy (Olympus FV1000, Tokyo, Japan); living cells exhibited red fluorescence, whereas dead or dying cells exhibited green fluorescence. Measurement of Intracellular ROS and ATP: The intracellular ROS levels of each group (0, 12.5, 25, and 50 mM/l of SB treatment) were determined using a ROS assay kit (Beyotime Institute of Biotechnology). Cells were stained with the fluorescent dye DCFH-DA (10 mM/l) for 20 min in a darkroom and analysed by flow cytometry. Cellular ATP levels were measured using an ATP Assay Kit (Beyotime Institute of Biotechnology); the assay is based on the requirement of luciferase (PerkinElmer, Waltham, MA, USA) for ATP to produce light. Luminescence was read, and
values were calculated based on an ATP standard curve. Immunofluorescent Staining: Cells of each group (0, 12.5, 25, and 50 mM/l of SB treatment) were fixed in 4% paraformaldehyde for 15 min, permeabilized with 0.1% Triton X-100 for 15 min, blocked with 5% bovine serum albumin in PBS for 1 h, and incubated with MitoRed (KeyGEN BioTECH, Jiangsu, China) for 1 h in darkness at room temperature (RT) to visualise mitochondrial morphology. Cells were then incubated with 4′,6-diamidino-2-phenylindole (Beyotime Institute of Biotechnology) in darkness at RT for 5 min. Samples were washed twice with PBS and imaged under a confocal microscope (LSM800, Carl Zeiss, Oberkochen, Germany). RNA Extraction and qPCR: After 24 h of treatment with SB (0, 12.5, 25, and 50 mM/l), cells were collected. Total cellular RNA was extracted using TRIzol reagents and subjected to qPCR analysis with SYBR® Premix Ex Taq™ II (Tli RNaseH Plus, TaKaRa, Tokyo, Japan). β-Actin was used as the internal control. Western Blotting: Antibodies against Drp1, CDK2, cyclin E1, and β-actin were purchased from Affinity Biosciences (OH, USA). For western blotting, cells of each group (0, 12.5, 25, and 50 mM/l of SB treatment) were harvested and lysed with RIPA buffer (Beyotime Institute of Biotechnology) for 30 min, then centrifuged at 12,000 × g for 15 min, and the supernatant was collected. Proteins were quantified with the BCA Protein Assay Kit (Thermo Fisher Scientific, Inc.). The protein levels of Drp1, CDK2, cyclin E1, and β-actin were measured by the FluorChem
E™ system (ProteinSimple, San Francisco, CA, USA). Statistical Analysis: All data are expressed as means ± standard deviations and were analyzed with SPSS 20.0 (IBM, Armonk, NY, USA). Multiple comparisons were analyzed by Tukey's test, and values were considered statistically significant when P < 0.05. SB Promoted Cell Proliferation in Human AC16 Cardiomyocytes, LO2 Hepatocytes, and Human Proximal Tubular Epithelial HK2 Cells: Cell proliferation experiments were used to evaluate the potential effects of SB on normal cell progression. As shown in Fig. 1A, SB increased the viability of human AC16 cardiomyocytes, LO2 hepatocytes, and human proximal tubular epithelial HK2 cells in a dose- and time-dependent manner. We further assessed the proliferation of normal cells by colony formation and EdU incorporation assays after SB treatment and found that SB significantly increased normal cell proliferation (Fig. 1B, C). To further investigate the role of SB in the proliferation of AC16, LO2, and HK2 cells, we assayed the cell cycle by flow cytometry. Cell cycle analysis showed that the ratio of cells at the G1 to S phase was distinctly increased by SB (Fig. 2A). CDK2 and cyclin E1 protein levels were significantly increased in normal cells treated with SB in a dose-dependent manner (Fig. 2B). Taken together, these results suggest that SB promotes normal cell proliferation in vitro. SB Increased Mitochondrial Function in Human AC16 Cardiomyocytes, LO2 Hepatocytes, and Human Proximal Tubular Epithelial HK2 Cells: Mitochondrial function plays a pivotal role in cell progression. Mitochondria are highly dynamic organelles that produce ATP to provide
cellular energy. To investigate the effect of SB on mitochondrial form and function, ATP production was evaluated. Our results revealed that SB dose-dependently promoted ATP production in AC16, LO2, and HK2 cells (Fig. 3A). MMP, which is closely related to cellular ATP production, was significantly increased in AC16, LO2, and HK2 cells in a dose-dependent manner (Fig. 3B). Maintenance of the mtDNA copy number is essential for the preservation of mitochondrial form and function 18; the mtDNA copy number was dramatically increased in a dose-dependent manner upon SB treatment (Fig. 3C). Mitochondria are the primary source of ROS in most cells, and moderate levels of ROS are needed to maintain the function of normal cells. Our results showed that the level of ROS was increased in normal cells treated with SB (Fig. 3D). Hence, our findings suggest that SB can improve mitochondrial function in AC16, LO2, and HK2 cells. SB Promotes Cell Proliferation Through Drp1-Mediated Mitochondrial Fission in Human AC16 Cardiomyocytes, LO2 Hepatocytes, and Human Proximal Tubular Epithelial HK2 Cells: The highly dynamic mitochondrial network is tightly regulated by mitochondrial fission. MitoRed staining was used to observe mitochondrial morphology; greater fluorescence intensity and more extensive mitochondrial fragmentation were observed in normal cells treated with SB (Fig. 4A). We next asked whether Drp1 plays a role in SB-induced mitochondrial fission. As shown in Fig. 4B, SB treatment substantially increased the protein expression of Drp1 in a dose-dependent manner. The gene expression level of Drp1 was also increased in normal
cells with the treatment of SB (Fig. 4C). We then investigated the underlying mechanism of SB promoting cell proliferation through Drp1. Drp1-specific siRNA was used to knock down the expression of Drp1 in cells; as shown in Fig. 5A, Drp1-siRNA efficiently reduced Drp1 expression. Drp1 has been shown to be a major regulator of the cell cycle. Compared with the negative control group, Drp1 knockdown significantly decreased the percentage of cells undergoing the G1/S transition under SB treatment (Fig. 6A) and reversed the SB-induced increase of CDK2 and cyclin E1 protein expression in Drp1-siRNA-transfected cells (Fig. 5B). Similarly, colony formation and EdU incorporation assays revealed that silencing Drp1 could reverse the pro-proliferative effects of SB in cells (Fig. 7A, B). These results suggest that SB promotes cell proliferation through up-regulating Drp1, an essential mediator of mitochondrial fission. Discussion: As essential metabolic organs of the body, the heart, liver, and kidney have a unique capacity to regulate their growth and mass; however, when these metabolic organs become compromised and disabled, the outcome can be fatal [19][20][21]. Thus, the potential for regeneration of these metabolic organs could promote a quick patch-up repair. SB, a predominant flavonoid component extracted from the fruits and seeds of S. marianum L., has been proposed to have hepatoprotective 2, cardioprotective 22, metabolic syndrome-alleviating 23, and anticancer 5 effects. Recently, studies have reported that SB can affect mitochondrial function, but the mechanisms are still not clear. In this study, we are the
first to demonstrate that SB could dose-dependently increase cell proliferation in human AC16 cardiomyocytes, LO2 hepatocytes, and human proximal tubular epithelial HK2 cells. The increased ATP content, mtDNA copy number, MMP, and ROS formation indicated that SB could benefit mitochondrial formation and function. Moreover, we found that SB promoted Drp1-mediated mitochondrial fission to improve mitochondrial function and formation. These data collectively suggest that SB treatment promotes the proliferation of human AC16 cardiomyocytes, LO2 hepatocytes, and human proximal tubular epithelial HK2 cells through Drp1-mediated mitochondrial fission. In eukaryotic cells, the cell cycle plays a vital role in cell proliferation. During the G1 phase of the cell cycle, energy and biosynthetic capacity are accumulated for the duplication of the genome and cellular biomass. The G1/S transcriptional program initiates in the late G1 phase, in which DNA and protein synthesis are prepared for beginning a new round of proliferation 24. Cyclins and cyclin-dependent kinases (CDKs) are checkpoints that monitor cell cycle progression. In mammalian cells, cyclin E1 is an activator of CDK2, and the activity peak of the cyclin E1/CDK2 kinase complex is required for the cell cycle transition from the G1 phase to the S phase 25. In the present study, we found that SB promoted the G1/S cell cycle transition and increased the protein expression of CDK2 and cyclin E1 in human AC16 cardiomyocytes, LO2 hepatocytes, and human proximal tubular epithelial HK2 cells, which demonstrated that SB directly triggered progression from the G1 phase to the S phase in normal cells, consequently promoting cell proliferation. Cell cycle progression is promoted
by mitochondrial dynamic changes, especially in G1-to-S phase progression. Mitochondrial dynamics are important for keeping mtDNA distributed and preserving the integrity of mtDNA 26. [Figure legend] The expression levels of CDK2 and cyclin E1 were determined by western blotting. Values (mean ± SD) were obtained from at least three independent experiments. *P < 0.05, **P < 0.01, and ***P < 0.001, versus negative control siRNA group; #P < 0.05 and ##P < 0.01, versus SB treatment group by one-way ANOVA with Tukey's test. ANOVA: analysis of variance; SB: silibinin; SD: standard deviation. With a unique giant and hyperfused network, mitochondria display a higher ATP-producing ability at the G1/S phase of the cell cycle 27. ATP, generated in mitochondria, is considered the most crucial single molecule supplying energy in life. ROS production is correlated with the ATP-producing ability of mitochondria, and mitochondrial fission plays an important role in keeping ROS levels in check; moderate levels of ROS can promote cell proliferation and survival 28. MMP, generated by the mitochondrial respiratory chain in the inner mitochondrial membrane, depends on energy to be generated and maintained 29, and is utilized for importing proteins into the mitochondria 30. Consistently, we observed a promoting effect
of increasing mitochondrial fission on ATP production and ROS production in cells treated with SB. Moreover, we found that elevated mitochondrial fission increases MMP and mtDNA copy number through upregulating Drp1 expression. Mitochondrial morphology and function are maintained through mitochondrial dynamics: mitochondrial fission facilitates mitochondrial distribution and segregates damaged sections from mitochondria 31. Drp1, a cytosolic protein, is a major mediator of mitochondrial fission; by directly controlling mitochondrial morphology, Drp1 is crucial for cell proliferation, death, metabolism, and ROS production 32. Our study has shown that SB enhanced the expression of Drp1 and led to more mitochondrial fragmentation and fewer tubules in normal cells. Some main cell cycle mediators can directly regulate mitochondrial dynamics; during the regulated cell cycle, Drp1 has been demonstrated to be a major mediator of cell cycle control, and the major cyclins have functional or molecular links with Drp1 activity 33. In a previous study, we found that SB inhibited cervical cancer cell proliferation by inducing G2/M cell cycle arrest via activation of Drp1 34. In this study, SB promoted normal cell proliferation by facilitating the G1/S transition. Promotion of the G1/S transition accumulates the energy and biosynthetic capacity for a new round of proliferation, whereas G2/M arrest prevents cells from entering mitosis. Thus, to verify whether the SB-promoted G1/S phase in the normal cell cycle is mediated by Drp1, we investigated whether downregulation of Drp1 could reduce SB-induced cell proliferation. In this study, knockdown of Drp1 decreased the
cell colony number and proliferation, reduced the ratio of cells in the G1/S phase of the cell cycle, and reduced the expression of CDK2 and cyclin E1. This evidence indicated that SB-induced mitochondrial fission played a critical role in the key G1/S cell cycle transition, which contributed to normal cell proliferation. In further studies, we will focus on the specific mechanism by which SB promotes the proliferation of normal cells while inhibiting the growth of tumor cells via Drp1, in search of a promising medicine for the treatment of cancer. Conclusions: Our study illustrates that SB promotes the G1/S cell cycle transition through Drp1-mediated mitochondrial fission, and thus promotes proliferation in human AC16 cardiomyocytes, LO2 hepatocytes, and human proximal tubular epithelial HK2 cells. Therefore, SB may be a potentially useful herbal derivative for the daily prevention of, and regeneration in, various diseases caused by impaired mitochondrial fission.
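A note on the qPCR readout discussed above: the Methods report relative mtDNA copy number by RT-qPCR without stating the calculation. A common choice (an assumption here, not confirmed by the paper) is the 2^-ΔΔCt method, normalising a mitochondrial amplicon to a nuclear reference gene and then to the untreated control group:

```python
def relative_mtdna_copy_number(ct_mt_treated: float, ct_nuc_treated: float,
                               ct_mt_control: float, ct_nuc_control: float) -> float:
    # 2^-ΔΔCt relative quantification (assumed method, names hypothetical):
    # the mitochondrial target Ct is normalised to a nuclear reference gene,
    # then to the control group; lower Ct means more template.
    delta_treated = ct_mt_treated - ct_nuc_treated
    delta_control = ct_mt_control - ct_nuc_control
    return 2.0 ** -(delta_treated - delta_control)

# Example: treated cells amplify the mitochondrial target two cycles earlier
# relative to the nuclear gene than controls do -> 4-fold relative copy number.
fold = relative_mtdna_copy_number(20.0, 18.0, 22.0, 18.0)  # -> 4.0
```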
Decoding the Real-Time Neurobiological Properties of Incremental Semantic Interpretation Abstract Communication through spoken language is a central human capacity, involving a wide range of complex computations that incrementally interpret each word into meaningful sentences. However, surprisingly little is known about the spatiotemporal properties of the complex neurobiological systems that support these dynamic predictive and integrative computations. Here, we focus on prediction, a core incremental processing operation guiding the interpretation of each upcoming word with respect to its preceding context. To investigate the neurobiological basis of how semantic constraints change and evolve as each word in a sentence accumulates over time, in a spoken sentence comprehension study, we analyzed the multivariate patterns of neural activity recorded by source-localized electro/magnetoencephalography (EMEG), using computational models capturing semantic constraints derived from the prior context on each upcoming word. Our results provide insights into predictive operations subserved by different regions within a bi-hemispheric system, which over time generate, refine, and evaluate constraints on each word as it is heard. Introduction Spoken language comprehension involves a variety of rapid computations that transform the auditory input into a meaningful interpretation. When listening to speech, our primary percept is not of the acoustic-phonetic detail, but of the speaker's intended meaning. This effortless transition occurs on millisecond timescales, with remarkable speed and accuracy and without any awareness of the complex computations on which it depends. How is this achieved? What are the processes and representations that support the transition from sound to meaning, and what are the neurobiological systems in which they are instantiated? Understanding
the meaning of spoken language requires listeners to access the meaning of each word that they hear and integrate it into the ongoing semantic representation in order to incrementally construct a syntactically licensed semantic representation of the sentence (Tyler and Marslen-Wilson 1977; Marslen-Wilson and Tyler 1980; Kamide et al. 2003; Hagoort et al. 2009). Research to date provides a broad outline of the neurobiological language system and of the variables involved in language comprehension (Hickok and Poeppel 2007; Marslen-Wilson and Tyler 2007; Friederici 2011; Kutas and Federmeier 2011; Price 2012; Bornkessel-Schlesewsky and Schlesewsky 2013; Hagoort 2013; Matchin and Hickok 2020), but surprisingly, little is known about the specific spatio-temporal patterning and the neurocomputational properties of the incremental processing operations that underpin the dynamic transitions from the speech input to the meaningful interpretation of an utterance. This is our goal in the present study where we probe directly the dynamic patterns of time-sensitive neural activity that are elicited by spoken words, focusing on the semantic constraints they generate on upcoming words and the incremental processes that combine them into semantically coherent utterance interpretations. We use computational linguistic analyses of language corpora to build quantifiable models of semantic constraint and mismatch, where the latter reflects the processing demands of interpreting the upcoming word given the properties of prior constraints (Hale 2001; Levy 2008). Based on these cognitive models, we employ representational similarity analysis (RSA) to probe the different types of neural computation that support dynamic processes of incremental interpretation, using source-localized MEG + EEG (EMEG) imaging to capture the real-time electrophysiological activity of the brain. RSA enables us to
compare the (dis)similarity structure of our theoretically relevant models with the (dis)similarity structure of observed patterns of brain activity, revealing how different information types are encoded in different brain areas over time. In a previous EMEG study, involving single spoken words, we used these methods to map out the spatio-temporal dynamics of the word recognition process (Kocagoncu et al. 2017). Using RSA to test quantifiable cognitive models of key analysis processes as they occur in real time in the brain, we identified the cortical regions that support the early phonological and semantic competition between cohort candidates as a word is heard, and the dynamic process of convergence on a single candidate and its unique semantic representation as the uniqueness point (UP) approaches [i.e., the point at which the word can be differentiated from its word-initial cohort and is uniquely recognizable (Marslen-Wilson 1987)]. Hence, identifying the UP plays an important role in interpreting the timing of linguistic processing with respect to the input word. In a subsequent study, placing spoken words in a minimal phrasal context (e.g., yellow banana), we constructed RSA models of the semantic constraints generated by the adjective (yellow) to determine how these interacted with the processing of the following noun (banana). Consistent with previous behavioral and ERP results (Marslen-Wilson 1975; Kamide et al. 2003; DeLong et al. 2005; Bicknell et al. 2010), we found early effects of prior probabilistic semantic constraints on lexical processing (within 150-200 ms of word onset), where the timing of these effects reflects the prior access of potential word candidates driven by the
sensory input (Klimovich-Gray et al. 2019). These studies suggest an underpinning lexical access process where lexical contents can be made available very soon after word onset for interaction with contextual constraints. In the context of these two studies, the current study aims to determine how these rich contextual constraints incrementally combine words into a meaning interpretation and how this interpretation modulates the processing of subsequent words in the utterance. Critical to this study is the development of the appropriate quantifiable measures of the relevant properties of the sentential processing environment, as the basis for the RSA models used to probe the real-time brain activity elicited by hearing the test sentences. Within the broad context of predictive processing frameworks (Kuperberg and Jaeger 2016), we investigated the role of semantic constraint elicited by the incrementally developing context in sentences such as "The experienced walker chose the path," including its subject, verb, and object, in generating a message-level interpretation. To do this, we used language models of constraint and mismatch derived by combining the behavioral responses from sentence completion studies with the latent Dirichlet allocation (LDA) approach of topic modeling (Griffiths and Steyvers 2004). These models were used to construct RSA models of semantic constraints, as they evolve over a spoken utterance, and to look at the spatiotemporal pattern of model fit for each processing dimension being tested (Kocagoncu et al. 2017). Importantly, the cognitive models that test for effects of semantic constraints and their integration into the developing sentence are probabilistic and experiential in nature, reflecting language as | 162 | 221404245 | 0 | 16 |
people experience it in the real world and providing the type of quantifiable data necessary to calculate rich multivariate representational models. This avoids the limitation of relying on categorical distinctions between stimuli which fail to capture the multifaceted richness of linguistic representations and the probabilistic nature of language. Our primary interest here is in what we call "combined constraints" on upcoming words, the cumulative constraints generated by the set of words comprising the prior context. In this study, we developed a set of contextual constraint models in order to illuminate the temporal progression of predictive processing as each word [i.e., verb and complement noun (CN)] incrementally unfolds over time. This enables us to illustrate the spatiotemporal dynamics of the cumulative effects of constraints and to determine how far these constraints are neurally expressed. In common with recent accounts of incremental processing of speech inputs, we expect to see the computation of constraints as each word is being recognized (Marslen-Wilson 1975;Marslen-Wilson and Tyler 1980;DeLong et al. 2014). The RSA models, as described above, primarily focus on modeling these constraints and the relative timing with which they appear as the utterance unfolds over time. We also investigate the mismatch effect between the context and a target word (CN) that captures the difficulty of semantically processing the target word with respect to the constraint imposed by the prior context, based on its semantic properties. Together the timing and location of the effects captured by these models reveal a picture of when and where the human brain activates and utilizes | 163 | 221404245 | 0 | 16 |
constraints at the semantic level.

Overview

To determine the spatiotemporal neural properties of incremental semantic interpretation during language comprehension, we developed models of the incremental constraints that the context imposes on the meanings of upcoming words and of the mismatch between an upcoming word and its fit into the prior context. We tested these models against the spatiotemporal properties of the source-localized EMEG data to compare the similarity structure of our theoretically relevant models. We tested for the timing of the model fit at different time points within a language mask comprising a bilateral fronto-temporo-parietal system of brain regions that has been frequently reported in the literature (Binder et al. 2009). We asked when and where each of our key models (of semantic constraint and of mismatch) would fit the brain data: When and where is there an effect of the subject noun phrase (SNP) semantic constraint? How does it change as a subsequent verb is processed? And what is the scope of these constraint effects on upcoming words?

Table 1. All semantic models used in this study and the epochs (0-600 ms in duration) in which they were tested against the brain data. The epoch(s) in which each model was tested was chosen specifically to investigate the cascade of incremental predictive processes: 1) emerging with the early activation of the SNP constraint on verbs and on CNs before the verb is recognized; 2) evolving with a verb being incorporated into the context once the verb is recognized; and 3) facilitating the semantic interpretation once the CN is recognized. The average duration of each word to which each epoch is aligned is indicated by the bracket [mean ± standard deviation (SD)].

In order to model incrementally developing constraint over time, we obtained measures of semantic prediction at two different points in a sentence: immediately after the SNP ["the experienced walker"] and after the combination of the SNP + verb ["the experienced walker chose . . . "]. In this way, we aimed to characterize the changing patterns of prediction as a verb is combined with the initial SNP context. To do this, we conducted two separate behavioral studies in which different groups of participants were asked to complete a sentence either after hearing the SNP fragments (study 1) or after hearing the SNP + verb fragments (study 2). We then extracted main verbs from the first behavioral study and CNs from the second behavioral study, allowing us to infer the predictive state of the brain throughout the sentence. However, in natural speech comprehension, prior constraints are relatively broad, so that specific words are rarely strongly predicted (Luke and Christianson 2016). Particularly during the early stage of sentence processing, the context (SNP or SNP + verb) rarely provides a strong prediction of a particular upcoming word, leading to high uncertainty (entropy) in word-level constraints (Kuperberg 2016). Therefore, we applied topic modeling to each unique word provided by participants in the behavioral studies, in order to characterize constraints derived from the rich semantic (topic) representation associated with each unique
word in a Bayesian framework of incremental predictive processing. To model prediction at a more abstracted semantic level, we combined the topic distributions of the continuation data into semantic "blends" of word candidates, modeling the conditional probability distribution P(topic | full context) (see Materials and Methods: Incremental Models of Predictive Processing). Then, we computed the entropy of the blend (see Materials and Methods: Modeling Predictive Constraint: Entropy) to quantify the overall constraint strength, which was tested against the EMEG data during relevant epochs as described in Table 1 (see also Fig. 1), in order to investigate the incremental development of semantic constraint. Finally, in order to investigate how the constrained words are evaluated and incorporated into the prior context (SNP + verb), we also characterized the EMEG data using a pattern of mismatch between the predicted and the target semantics (see Materials and Methods: Modeling Evaluation: Constraint Mismatch). In light of the claims that semantics is represented bilaterally (Price 2010, 2012; Wright et al. 2012), our approach provides an opportunity to determine whether different kinds of semantic computations are represented differentially across the hemispheres. We expected the predictive computations based on this information to involve bilateral anterior temporal and frontal areas, with the right hemisphere (RH) involved in the construction of a broader semantic representation and the engagement of the context (Beeman and Chiarello 1998; St George et al. 1999; Seger et al. 2000; Jung-Beeman 2005).

Participants

Fifteen participants (7 females; average age: 24 years; range: 18-35 years) took part in the study. They were all native British English speakers and
right-handed with normal hearing. Two participants were excluded from the analysis: one because of sleepiness during the EMEG study and the other because of poor-quality EEG recordings. Informed consent was obtained from all participants and the study was approved by the Cambridge Psychology Research Ethics Committee.

Stimuli

We constructed 200 spoken sentences consisting of an SNP (e.g., "the experienced walker"), followed by a verb (e.g., "chose"), which in turn was followed by a CN (e.g., "path"). The sentence sets were constructed in the following way. First, we chose verbs from the VALEX database (Korhonen et al. 2006) that occurred with (at least) two different complement structures: one was a simple transitive direct object (DO) structure (e.g., " . . . chose the path . . . ") and the other was one of three other possible complement structures: sentential complement (SC; " . . . denied that the court . . . "), infinitival complement (INF; " . . . wanted to become . . . "), and prepositional phrase complement (PP; " . . . fled to the forest . . . "). For 72% of the stimuli, the DO complement structure was the more frequent [according to the subcategorization frame (SCF) information in VALEX (Korhonen et al. 2006)], with an average probability of 0.499 ± 0.12 (mean ± SD). By adding some variability to the function words of the complement phrase, we aimed to improve the generalizability of our results to any natural spoken sentence with varying subcategorization structures. To ensure variability in
the predictability of the CNs, we varied the probability of these nouns given the preceding verb and the complement function word according to Google Books n-gram frequencies. Note that this variability was controlled in the analysis by including the frequency of the word to which each epoch was aligned as one of the covariates and partialling it out when correlating the data and model representational dissimilarity matrices (RDMs) [e.g., SN frequency at epoch 1, verb frequency at epoch 2, and CN (content word) frequency at epoch 3]. This process resulted in 200 sentences with four repetitions of each SNP + verb combination (see Fig. 2), consisting of varying complement structures (i.e., DO, SC, INF, and PP) with different complement content words. This ensured sufficient variability between trials in the ease with which the content word in the complement could be integrated into the ongoing sentential representation, given the constraints provided by the preceding context. Just as for the lexical frequency, we controlled for the repetition effect of the SNP + verb combination by including it as another covariate. In summary, we partialled out the effects of 1) lexical frequency of the word to which an epoch is aligned and 2) repetition of stimuli across trials.

Figure 1. Overview of the epochs in the experiment in relation to the incremental processing: Epoch 1: Activation of SNP constraint; Epoch 2: Modification of SNP constraint based on the Verb; and Epoch 3: Evaluation of SNP + V constraint on CNs. The epochs were each defined relative to an alignment point (AP) such that Epoch 1 is aligned to the SN onset, Epoch 2 is aligned to the verb onset, and Epoch 3 is aligned to the CN onset. Each epoch lasted for 600 ms, which included the average duration of each content word plus 1 SD. UP = the uniqueness point of a word (the earliest point in time when the word can be fully recognized after removing all of its phonological competitors).

Figure 2. Design of the experimental stimuli. Each sentence contained a key main verb ("chose") followed by a complement function word ("the" or "to") to vary the complement in terms of the SCF preference of the preceding verb. The function word was followed by a noun or a verb that was either consistent with the verb's preferred continuation or a less preferred continuation.

The sentences were spoken by a native female British English speaker and were recorded in a soundproof booth. In the experiment, participants were asked to listen to these sentences attentively while we recorded their brain activity using EMEG. There was no explicit task for them to perform, since tasks are known to invoke domain-general brain systems over and above any domain-specific language effects (Campbell and Tyler 2018). All stimuli were pseudo-randomized and counter-balanced across participants. We followed the standard procedure for presenting auditory stimuli as in our previous studies (Kocagoncu et al. 2017; Klimovich-Gray et al. 2019).

Incremental Models of Predictive Processing

In this study, we focused on two different incremental computations, 1) constraint and 2) evaluation, in
order to investigate the neurobiological underpinnings of how the preceding context guides the interpretation of an upcoming word. To do this, we combined behavioral data with computational models of semantics as described below.

Behavioral Studies

To model incrementally evolving constraints over the SNP, verb, and CN, we conducted two separate behavioral studies. In the first experiment, 24 participants (who did not take part in the main experiment or the second behavioral study) heard each unique SNP (e.g., "The experienced walker . . . ") and provided a sentence continuation after the SNP (e.g., " . . . hiked through the mountains," " . . . chose a less travelled path," etc.). We extracted the main verb from each sentence continuation and used these data with topic representations (see Materials and Methods: Semantic Modeling) to capture predicted verb semantics. In the second experiment, we asked 31 participants (who did not take part in the main experiment or the first behavioral study) to provide a sentence continuation after hearing each unique SNP + verb in our stimuli (e.g., "The experienced walker chose . . . "), for example, " . . . the shorter route," " . . . the hardest path," etc. Note that we only used the noun responses that are considered to be an object of the preceding verb (e.g., nouns in DO or PP complements, which we refer to as CNs throughout this paper) in order to remove any syntactic or thematic variability when modeling semantic interpretation of the CN. For example, any noun responses
in an SC were removed, since they are often treated as a new subject instead of an object (e.g., "The walking couple heard that the farm was open to visitors"). On average, this left 18 CN responses for every stimulus from the 31 participants. Any stimulus with fewer than 4 responses was excluded from the analysis.

Semantic Modeling

We trained a probabilistic topic model based on LDA (Griffiths and Steyvers 2004). LDA is a generative probabilistic model that assigns a word to different latent dimensions in a way that maximizes the posterior of the model. Such latent dimensions are often called "topics," which describe the semantic content of a word in the form of a probability distribution. In this study, topic distributions (consisting of 100 topics) associated with each content word were generated using corpus-based tensor data (Baroni and Lenci 2010). Instead of using raw co-occurrence frequency, we used local mutual information from the tensor, because it normalizes the effect of the lexical frequency of individual items when computing the semantic relation (co-occurrence) between two words. Furthermore, instead of using all co-occurrence data in the tensor, we selected only specific subsets in order to capture syntactically licensed semantic representation specifically with respect to a word in the context. In particular, we focused on the incremental and cumulative development of the semantic constraint from an SN (agent) to a CN. To do this, we trained two separate topic models based on the co-occurrence between 1) SN and verb (SN-V) and 2) the preceding words including SN and verb and
CN (object) (SNV-CN). These models provided different aspects of semantic representation relevant for incremental predictive processing, as follows: 1) the first (SN-V) topic model was trained specifically to characterize the predictive representation of SNs on upcoming verbs and the specific semantic content of verbs that are syntactically licensed with respect to the preceding SNs and 2) the second (SNV-CN) topic model was trained specifically to characterize the predictive representation of SNs and verbs on CNs and the specific semantic content of CNs that is syntactically licensed with respect to the preceding SNs and verbs. See Section 1 in Supplementary Materials for more details regarding model training and parameter settings. See Supplementary Fig. S1 for illustrations of the SNV-CN topic model.

Modeling Predictive State: Semantic Blends

After obtaining the behavioral responses from the two sentence completion studies (verbs from the first and CNs from the second study) and the topic representation associated with a set of unique responses for each sentence, we combined them to generate an overall representation across multiple responses (for either the unique verbs or the CNs) to capture consistent semantic content shared by the set of verbs predicted by the SNP or by the CNs predicted by the SNP + verb. In this way, we aimed to model predictive activation of semantic contents associated with multiple lexical items based on the preceding context. The semantic blend was computed as below:

blend(words) = P(topic | full context) = Σ_word P(topic | word) P(word | full context),

where P(word | full context) is a probabilistic weight associated with
a given word (see Behavioral Studies) and P(topic | word) is the topic distribution for that word (see Incremental Models of Predictive Processing). Based on this formula, we constructed three different "blend" vectors.

SN-V verb blend. This blend is designed to model the SNP constraint on upcoming verbs. We counted the (post-SNP) verb responses from the first sentence completion study. Then, the frequency count associated with each unique verb that was produced by participants was, in turn, used as a weight on the topic distribution of that verb. From the topic model trained specifically on the SN-verb co-occurrence data, we obtained the topic representation of each unique verb, which was weight-combined as expressed in the formula above [i.e., P(verb_topic | verb) P(verb | SNP)].

SNV-CN verb blend. Despite being a verb blend, this second blend model is designed to model the SNP constraint on CNs (rather than its constraints on the verb), via the set of predicted verbs obtained from the first behavioral study. We counted the (post-SNP) verb responses and the frequency count associated with each unique verb that participants produced, as above. However, we obtained the verb topic distributions from a second topic model trained specifically on the mixed SN-CN and verb-CN co-occurrence data, reflecting the predictive representation on upcoming CNs. Then, each predictive representation (topic-context distribution) of unique verbs in relation to CNs was weight-combined as expressed in the formula above [i.e., P(CN_topic | verb) P(verb | SNP)].

SNV-CN CN blend. The third blend focused on modeling the combined constraint of SNP + verb on CNs.
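Concretely, each of these blends is a probability-weighted mixture of per-word topic distributions. The sketch below illustrates the computation with hypothetical toy values (the function name, the data, and the three-topic dimensionality are ours for illustration; the actual distributions had 100 topics and came from the LDA models described above):

```python
import numpy as np

def semantic_blend(topic_dists, response_counts):
    """Weight-combine per-word topic distributions into one blend:
    P(topic | context) = sum_w P(topic | w) * P(w | context)."""
    words = list(topic_dists)
    counts = np.array([response_counts[w] for w in words], dtype=float)
    p_word = counts / counts.sum()          # P(word | context) from continuation counts
    topics = np.vstack([topic_dists[w] for w in words])  # rows: P(topic | word)
    return p_word @ topics                  # convex mixture of distributions

# Toy example: two candidate verbs over three topics (hypothetical values)
dists = {"chose": np.array([0.7, 0.2, 0.1]),
         "hiked": np.array([0.1, 0.6, 0.3])}
blend = semantic_blend(dists, {"chose": 3, "hiked": 1})
# blend == [0.55, 0.30, 0.15]; it sums to 1 because each input row sums to 1
```

Each of the three blends is an instance of this computation, differing only in which continuation counts and which topic model supply the inputs.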
To do this, we counted the (post-SNP + verb) CN responses from the second sentence completion study. Then, we used the CN topic distributions from the second topic model trained specifically on the mixed SN-CN and verb-CN co-occurrence data, reflecting the topic representation of each unique CN in relation to the preceding subjects and verbs. Then, just as for the other blends, each topic representation (target-topic distribution) associated with each unique CN was weight-combined as expressed in the formula above [i.e., P(CN_topic | CN) P(CN | SNP + verb)]. In summary, we generated these three blends, whose entropy is designed to address how constraints incrementally change and develop.

Modeling Predictive Constraint: Entropy

Entropy is a metric designed to quantify the amount of uncertainty in distributional models. Therefore, the entropy of the blend distributions in this study reflects the strength of semantic constraint regarding upcoming words (higher uncertainty = weaker constraint). However, in any topic model, each topic varies in terms of the types of words it prefers with different probabilities. This naturally leads to variations in semantic dispersion across topics, potentially undermining the estimation of true semantic entropy. Here, we addressed this issue by linearly combining entropy with topic dispersion as follows:

H = Σ_i w_i h_i,

where w is a vector of semantic dispersion weights across topics and h is a vector containing the local entropy values h_i = −P(x_i) log P(x_i). In this paper, we use the term entropy and the notation H to refer to this dispersion-corrected entropy. The semantic dispersion was calculated by averaging pair-wise cosine distances between topic distributions among every pair of words within a topic (Lyu et al. 2019). If the target words preferred by a topic have similar distributions, the average cosine distance will be low. Then, this "within-topic" semantic dispersion was linearly combined with the local entropy values to manipulate the contribution of each topic to the degree of overall constraint strength across topic candidates. In this way, we effectively controlled for "within-topic" dispersion when computing "between-topic" constraint. Each of the semantic blends described above was taken as an input to the entropy function (Fig. 3), generating three semantic constraint models, which were tested against the spatiotemporal patterns of neural activity at specific epochs (Table 1).

Figure 3. Reducing entropy in prediction before (left panel) and after (right panel) a verb is incorporated into the SNP context. The topic distributions on the top are the semantic blends of the CNs predicted by the SNP and by the SNP + verb, respectively. The entropy associated with each of the two distributions is also described. The word boxes below the distributions show a set of preferred words based on the predicted topics.

Modeling Evaluation: Constraint Mismatch

Semantic evaluation refers to a process of resolving the mismatch between a current input and the predicted candidates based on the preceding context, leading to an accurate interpretation of the input that fits the context. To model this process, we quantified the degree of mismatch by computing the cosine distance between the semantic representations of the predicted CNs and the target CN. As described in Materials and Methods: Behavioral Studies, we excluded from the analysis any items that do not contain a CN (i.e., a noun
considered to be an object of a preceding verb), because this mismatch model requires the target CN to be identified. This left us with 128 out of 200 trials.

Spatiotemporal Searchlight RSA

In order to determine when and where these constraint models and associated computations are neurally realized, we used spatiotemporal searchlight RSA (ssRSA) (Su et al. 2012). Each searchlight is defined for each vertex at each time-point, providing a fine-grained spatiotemporal map of neural activity. To characterize this dynamic pattern of neural activity, we constructed model RDMs using specific properties of the blended distributions across sentences described above. Since all of the model RDMs in this study were based on summary metrics designed to capture various incremental aspects of distributional semantics, the representational geometry was characterized simply by calculating the absolute distance of the metric values between every pair of trials. Each of these model RDMs was then compared with the patterns expressed by the neural RDMs, constructed by correlation distance between every pair of trials for each searchlight across space and time (see Fig. 4). The size of each searchlight was set as a spatial radius of 10 mm and a temporal radius of 30 ms. ssRSA was performed within a language mask, which included all anatomical regions in a set encompassing bilateral fronto-temporo-parietal regions, using the Harvard-Oxford cortical atlas (Kocagoncu et al. 2017; Lyu et al. 2019). See Figure 4 for a surface rendering of this language mask. These regions are reliably shown to be involved in language processing
(Binder et al. 2009; Price 2010, 2012).

Figure 4. A schematic illustration of the searchlight RSA of spatiotemporal source-space EMEG data. The bilateral language mask used in this study is surface-rendered onto the brain template in the figure for visualization. Since the source-space EMEG data inherently vary across time and space, we calculated the similarity of the spatio-temporal patterns of brain activity for different trials based on measurements within each searchlight sphere with a spatial radius of 10 mm and a temporal radius of 30 ms. We used 1 − Pearson's correlation between pairs of trials as the distance metric to compute an RDM for each searchlight, yielding a searchlight map of data RDMs. Each data RDM is then correlated with each model RDM using Spearman's correlation. This Spearman's correlation was computed for each subject and the significance of the correlation at each searchlight location was tested using a one-sample t-test (H0: Spearman correlation will be zero). The figure illustrates this process, yielding a time-course of t-values across spatiotemporal searchlights.

EMEG Recordings and MRI Acquisition

MEG data were recorded on a VectorView system (Elekta Neuromag) using 306 sensors (102 magnetometers and 204 planar gradiometers), located in a magnetically shielded room at the MRC Cognition and Brain Sciences Unit, Cambridge, UK. In conjunction with the MEG recordings, we recorded EEG signals using an MEG-compatible EEG cap (Easycap, Falk Minow Services) with 70 electrodes, plus external electrodes and a nose reference. To monitor head movement in the MEG helmet, five head position indicator (HPI) coils attached to the scalp recorded head
position every 200 ms. Blinks and eye movements were recorded by electro-oculogram (EOG) electrodes placed above and beneath the left eye and beside the left and right outer canthi. Cardio-vascular effects were recorded by electro-cardiogram (ECG) electrodes attached to the right shoulder blade and left torso. To be able to co-register the EEG and MEG data to anatomical structural scans for each participant, the positions of the HPI coils and EEG electrodes were digitized relative to three anatomical landmarks (nasion, left and right pre-auricular points). In addition, the participant's head shape was digitized across the head. MEG signals were recorded with a sampling rate of 1000 Hz and were high-pass filtered to remove any signals below 0.03 Hz. To localize the EEG and MEG data to sources on the cortical surface, structural MRI scans were acquired for each participant in a separate session using a 1-mm isotropic resolution T1-weighted MPRAGE sequence on a Siemens 3 T Prisma scanner (Siemens Medical Solutions) located at the Cognition and Brain Sciences Unit, Cambridge, UK.

EMEG Preprocessing

The raw MEG data were max-filtered (Elekta-Neuromag) to remove bad channels and to compensate for head movement, using signal space separation techniques (Taulu and Simola 2006). Statistical Parametric Mapping 8 (SPM8; Wellcome Institute of Imaging Neuroscience) was used to complete the remaining stages of EMEG preprocessing [except for independent component (IC) analysis artifact rejection]. First, a low-pass filter at 40 Hz was applied to the data using a fifth-order bidirectional Butterworth digital filter. In order to remove any physiologically driven artifacts, such as blinks or cardiac signals recorded by EOG
and ECG, the data signals were decomposed into ICs and each IC was correlated with the vEOG, hEOG, and ECG channels. Any ICs showing very high temporal correlation (correlation >0.3) with any of these channels were removed, and the remaining ICs were visually inspected to ensure that no artifact component remained. The remaining ICs were then used to reconstruct the data. Next, three separate analysis epochs were generated by aligning the data to the onset of each of the three points of interest in each sentence (see Fig. 1). The duration of each epoch (0-600 ms) was consistent across all three epochs. This duration was chosen to cover the average duration of each word + 1 SD, as described in Figure 1. One epoch was aligned to the SN, another to the verb, and a third to the CN. We also calculated the uniqueness point (UP) of each of these words from the CELEX database (Baayen et al. 1993) to relate the timing of neural effects to when the word is recognized. After epoching, each channel was baseline-corrected by subtracting the time-averaged data from a baseline period of −200 to 0 ms relative to sentence onset (i.e., a period of silence immediately preceding the sentence). Finally, automatic artifact rejection was used to identify trials for which 15% or more of the sensors in any one of the three sensor types exceeded an amplitude threshold (6e−11 T for magnetometers, 3e−12 T/m for gradiometers, and 2e−04 V for EEG), and these trials were rejected [an average of 15 trials were rejected (SD = 13.43)].
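To make the analysis chain concrete, the sketch below strings together the key computations described above: a dispersion-corrected entropy per trial, a model RDM from absolute metric differences, a data RDM from 1 − Pearson correlation, and a Spearman model fit for one searchlight. It uses random toy data and hypothetical sizes, and is not the SPM8/ssRSA pipeline itself; in the actual analysis the correlation additionally partialled out covariates (lexical frequency, stimulus repetition) and was repeated for every searchlight and subject.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_trials, n_topics, n_features = 20, 100, 50  # toy sizes; features = vertices x time-points

def weighted_entropy(p, w):
    """Dispersion-corrected entropy H = sum_i w_i * (-p_i * log p_i)."""
    local_h = np.where(p > 0, -p * np.log2(p), 0.0)
    return float(w @ local_h)

# One blend distribution per trial (rows sum to 1) and per-topic dispersion weights
blends = rng.dirichlet(np.ones(n_topics), size=n_trials)
w = rng.uniform(0.5, 1.0, n_topics)           # within-topic dispersion weights
entropy = np.array([weighted_entropy(p, w) for p in blends])

# Model RDM: absolute difference of the summary metric between every trial pair
model_rdm = pdist(entropy[:, None], metric="cityblock")

# Data RDM for one searchlight: 1 - Pearson correlation between trial patterns
patterns = rng.standard_normal((n_trials, n_features))
data_rdm = pdist(patterns, metric="correlation")

# Model fit: Spearman correlation between the condensed RDMs; per-subject rho
# values would then enter a one-sample t-test and cluster permutation statistics
rho, _ = spearmanr(model_rdm, data_rdm)
```

Both RDMs are held in SciPy's condensed (upper-triangle) form, so the pair ordering matches automatically when the two vectors are correlated.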
EMEG Source Reconstruction

Source reconstruction aims to estimate the regional response within the brain using the EMEG data recorded outside the scalp. We first transformed the participants' structural MRI images into an MNI template brain, which was then inverse-transformed to construct individual scalp and cortical meshes by warping canonical meshes of the MNI template brain to the original MRI space (Mattout et al. 2007). The MRI co-ordinates from the individual scalp and cortical meshes were co-registered with the MEG sensor and EEG electrode co-ordinates by aligning fiducial points and the digitized head shape to the outer scalp mesh. A single-shell conductor model and a boundary element model were used as forward models for the MEG and EEG recordings, respectively (the defaults in SPM8). We source-reconstructed our data using the minimum-norm assumption in SPM8 as a prior on the source covariance (López et al. 2014). This source prior was empirically adapted to maximize the model evidence, which, in turn, was used to compute the Maximum A Posteriori (MAP) source estimate.

Statistics and Multiple Comparisons Correction

Using the correlation time-courses for the model and data RDMs across subjects, we calculated a time-course of one-tailed t-statistics for every vertex (Fig. 4). From this point-wise statistic, we applied a cluster-forming threshold (CFT) of P = 0.01 and binarized the time-courses into clusters formed from sets of temporally and spatially contiguous vertices (data-points). Then, we summed t-values across each of the vertices within a cluster to compute a cluster-summed t-value. In this way, we aimed to emphasize the neural clusters that
are spatiotemporally distributed, while each of the vertices in the clusters shows a P-value of less than 0.01. For multiple comparisons correction across time-points, which are not independent of one another, we ran permutation statistics (Maris and Oostenveld 2007) on the CFT output. Under the null hypothesis that our model is not correlated with the data (r = 0), we randomly permuted the sign of the correlation values across different subjects and ran a one-sample t-test for every time-point. For each randomization, this null time-course of t-values was converted to a time-course of cluster-summed t-statistics. This random permutation process was repeated 1000 times and the cluster with the maximum t-value across all data-points for every run was saved. This process gives 1000 cluster-level t-values under the null hypothesis, and the significance of the observed cluster-level t-values was evaluated with respect to this null distribution.

Results

Using RSA and model RDMs of semantic constraint and mismatch, we probed source-localized EMEG data capturing the real-time electrophysiological activity of the brain to determine the spatiotemporal properties of the cumulative incremental effects of semantic constraints. For this purpose, we directly compared the strength of the semantic constraints generated by the SNP on verbs and CNs, as quantified by the entropy of P(verb_topic | SNP) and P(CN_topic | SNP), against the multivariate patterns of neural activity over space and time. Then, we looked at the effects of the combined SNP + verb constraint by computing the entropy of P(CN_topic | SNP + verb). In this way, we aimed to investigate the timing and neural regions that are
related to generating semantic constraints prior to a target word (i.e., verb or CN). Finally, to measure the predictive effects of the incrementally developed constraint on the processing of the CN semantics, we constructed a constraint mismatch model to examine the neural effects of semantic evaluation. We report significant (P ≤ 0.05) and marginally significant (0.05 < P ≤ 0.06) effects of the models sequentially as the sentence unfolds over time. Note that all of the reported results have large effect sizes (d > 0.8; see Fig. S2 in Supplementary Materials Section 2).

SNP's Adjacent Semantic Constraint (Entropy) on Upcoming Verb

We anticipated that the semantics of the SNP (e.g., "The experienced walker") would generate rich constraints on the upcoming speech. To test this hypothesis, we constructed models capturing the strength of the constraints generated by the SNP (e.g., the entropy of P(verb_topic|SNP) in this section and of P(CN_topic|SNP) in the section below). Using these entropy models, we aimed to assess the earliness of predictive computations and how they develop throughout a sentence. The results (Fig. 5a) show that the constraints on the verb generated by the SNP are significantly activated around the UP (347 ± 107 ms after onset) of the SN as it is recognized, lasting around 300 ms (from 290 to 600 ms), and are seen primarily in RH mid-anterior middle and inferior temporal areas (P = 0.032). This effect continued until the end of the SN (Epoch 1) and was not significant in Epoch 2, suggesting that listeners are actively constraining
upcoming verbs as soon as they recognize the SNP and that these constraints involve only RH temporal regions.

SNP's Nonadjacent Semantic Constraint (Entropy) on CN

When examining constraints on nonadjacent words in a sentence (in this case, SNP constraints on the CN), we need to consider the semantic relation between the context (SNP) and the target (CN) while taking into account any words that intervene between them (in this case, the verb).

Figure 5. Results of the ssRSA with the constraint and mismatch models across the three epochs described in Figure 1. Each panel shows the results for a different model, corresponding to each subsection in the Results. All clusters were corrected by permutation statistics with a CFT of P = 0.01 and a cluster-wise significance threshold of P = 0.05 (note that marginally significant clusters with P-values between 0.05 and 0.06 are also reported). A horizontal bar in black indicates the duration of the given cluster. The three alignment points [SN (subject noun), verb, and CN onsets] are indicated by long vertical dotted lines. UP stands for "uniqueness point" estimated from the CELEX database, and the shaded region in gray around the mean UP reflects ±1 SD from the onset. Similarly, the mean offset of each word is also marked, and the region shaded by gray hatch lines around the mean offset reflects ±1 SD from the onset.

Using the Bayesian approach, we computed the nonadjacent SNP constraint on CNs by taking into account the set of verbs that were predicted by hearing the SNP in
the first behavioral completion study: ∑_verb P(CN_topic|verb) P(verb|SNP). This mathematical formulation reflects the SNP constraint on CN semantics via the set of verbs predicted by the SNP, collected in the first behavioral study. This set of predicted verbs can be thought of as reflecting a process of semantic competition among partially activated semantic candidates. This is conceptually similar to the notion of cohort competition in spoken language comprehension [see (Marslen-Wilson 1987), which claims that multiple, partially activated word candidates initiated by the accumulating speech input momentarily compete with each other until the word is recognized]. Applying topic modeling to these predicted verbs enables us to model the SNP's constraints on the CN while taking into account the scope of the SNP's prediction on the intervening verb. Similar to the SNP's constraint on verbs, this nonadjacent constraint appeared around the UP of the SN, lasting from 270 to 590 ms after the SN onset (Fig. 5b). It involved early, relatively short-lived effects in bilateral anterior and middle temporal cortex [left hemisphere (LH): P = 0.026 from 280 to 510 ms; RH: P = 0.039 from 280 to 530 ms], which overlapped with effects in right inferior frontal areas (P = 0.026 from 270 to 590 ms; see Fig. 5b). Note that these are the results from Epoch 1, aligned to the SN onset. In a further analysis, we tested the spatiotemporal patterns of neural activity with the same nonadjacent SNP constraint model in Epoch 2 (Fig. 5b). We found a significant
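The Bayesian blend just described — marginalizing CN-topic distributions over the verbs predicted by the SNP, with the entropy of the result indexing constraint strength (low entropy = strong constraint) — can be sketched as follows. The function name and the toy numbers are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nonadjacent_constraint(p_verb_given_snp, p_topic_given_verb):
    """SNP's nonadjacent constraint on CN topics:
    P(topic|SNP) = sum over verbs of P(topic|verb) * P(verb|SNP).
    `p_verb_given_snp`: length-V vector of completion probabilities.
    `p_topic_given_verb`: V x K matrix; each row is a topic distribution.
    Returns the blended topic distribution and its entropy in bits."""
    p_topic = p_verb_given_snp @ p_topic_given_verb
    # Shannon entropy; the clip avoids log(0), so 0 * log 0 contributes 0
    entropy = -np.sum(p_topic * np.log2(np.clip(p_topic, 1e-300, None)))
    return p_topic, entropy
```

If the SNP predicts a single verb with certainty and that verb's topic distribution is concentrated, the entropy is low (a strong constraint); spreading probability across many verbs and topics raises it.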
SNP semantic constraint effect on the CN, but only in the right inferior frontal gyrus (RIFG) from the verb onset (P = 0.01; Fig. 5), lasting for 380 ms (1 SD after the mean UP), suggestive of competitive processing. We discuss the differential roles of the RIFG and the RH temporal regions, in light of the constraints that they activate, in the Discussion.

SNP + Verb's Semantic Constraint (Entropy) on CN

The analysis above examined the effect of the constraints imposed by the SNP on the CN, mediated through the verbs predicted in the behavioral test. In this section, we investigate the changes in the semantic constraint on the CN as the SNP context becomes enriched by combining with its adjacent verb (i.e., after the cohort competition among the verb candidates has ceased, a process reflected in the blend model). To do this, we tested the effect of the SNP + verb constraint model on CNs [i.e., the entropy of P(CN_topic|SNP + verb)], in order to elucidate the neurobiological basis of the development of incremental constraints (Fig. 5c). Our results showed that right mid-anterior middle and inferior temporal areas again played a role in constraining the CNs, starting 60 ms after the verb onset and lasting around 500 ms (P = 0.002; Fig. 5c). This early constraint effect likely reflects the constraint driven by the event representation generated by the SNP, which could be largely consistent with the constraint imposed by the verb, especially when the verb is light in terms of its semantic constraint, as in the majority of our
sentence stimuli (see Discussion). In addition, we also found a significant cluster in left anterior middle and inferior temporal regions from 270 to 470 ms (P = 0.025) and a marginally significant cluster in the left inferior frontal gyrus [LIFG (BA47/45); P = 0.06]. Based on the involvement of LATL and LBA47/45 in constraining upcoming CNs around the UP of the verb, we speculate that their role is to unify the verb into the broad semantic constraint set up by the SNP, essentially leading to a reduction in uncertainty in the constraint (see Fig. 3 and Fig. S3-1 in Supplementary Materials Section 3). The effect of the combined SNP + verb constraint persisted into Epoch 3, during which the CN is heard. This lasted until 370 ms into the CN, which is around the UP. However, this transition was associated with more posterior LH regions in the middle temporal gyrus (MTG) and angular gyrus (P = 0.031; Fig. 5c). This anterior-to-posterior transition may underscore the shift from constructing to utilizing the context-driven semantic constraint when hearing the CN in a sentence.

Semantic Mismatch between the Target CN and the Predicted CNs by the SNP + Verb Context

Our final analysis was aimed at demonstrating how the prior SNP + verb constraint facilitates the interpretation of the CN in light of its preceding context. To do this, we computed the cosine distance between the topic representation of the target CN (i.e., P(CN_topic|CN)) and the blend representation across the CNs predicted by the preceding SNP + verb context (i.e., P(CN_topic|SNP + verb)). This model reflects the degree of mismatch between the predicted and the target semantics of the complement. This measure can be viewed as an index of semantic evaluation, as it indicates the difficulty of processing the CN in light of the preceding context. Using this model, we observed a marginally significant cluster (P = 0.058) in LH posterior MTG from 370 to 520 ms after the CN onset (Fig. 5d). This mismatch effect emerges just after the constraint effect disappears, suggesting that the constraint is evaluated against the CN as soon as the predictive process terminates and the CN is fully identified. This last piece of evidence sheds light on the predictive computations actively engaged by listeners while incrementally processing the subject, verb, and object, which are critical components of understanding the message that the speaker conveys.

Discussion

The goal of the present study was to understand the neural dynamics of the cognitive processes at work as listeners incrementally interpret the spoken sentences that they hear. The computations involved in this process include: 1) the activation of the semantic constraints generated by the semantic content of each word in a sentence as it is heard, based on activated broad scenarios (or event structures); 2) how and when these constraints affect processing of the upcoming speech; and 3) the incremental fine-tuning and evaluation of the semantic constraint on each new word, integrating it into the developing semantic representation. During the experiment, listeners heard sentences consisting of an SNP, followed by a
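The mismatch index described in the final analysis above — the cosine distance between the target CN's topic vector and the blended prediction from the SNP + verb context — can be sketched as follows (the function name and example vectors are illustrative assumptions, not the authors' code):

```python
import numpy as np

def semantic_mismatch(p_topic_target, p_topic_predicted):
    """Cosine distance between P(topic|CN) for the heard CN and the
    blended prediction P(topic|SNP + verb): 0 = perfectly matching,
    1 = orthogonal (maximally mismatching) topic distributions."""
    num = float(p_topic_target @ p_topic_predicted)
    den = np.linalg.norm(p_topic_target) * np.linalg.norm(p_topic_predicted)
    return 1.0 - num / den
```

A CN whose topics fall inside the predicted blend yields a distance near 0, while a CN drawn from unpredicted topics approaches 1, giving a graded index of semantic evaluation difficulty.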
CN where the SNP and the verb varied in the cumulative probabilistic constraints they generated on the upcoming complement. We tested for the timing and neural location of these computations by recording real-time brain activity using EMEG and analyzing the spatiotemporal fit of probabilistic topic models against source-localized neural activity across an extensive set of bilateral frontal, parietal, and temporal regions. Our summary of the results with respect to the timing of effects throughout the entire sentence reveals the rapid transitions of information processing in the brain as each word (SN, verb, and CN) incrementally unfolds over time. Such transitions highlight the neural computations involved not only in processing individual words but also in combining them with the prior context to develop a representation of the meaning of the sentence (see Fig. 3 and Fig. S3-1 in Supplementary Materials Section 3). More specifically, our results revealed the spatiotemporal dynamics of incremental semantic computations in the brain: 1) the early activation of semantic constraints generated by the SNP primarily engaged RH mid-anterior temporal areas, whereas activating the nonadjacent constraint on CNs additionally recruited the RIFG and left temporal regions; 2) as the verb is recognized, the RH clusters started to decline but new clusters emerged in anterior left IFG (LIFG) and left anterior temporal lobe (LATL), actively constraining CNs based on the combined SNP + verb context; 3) as the target word (CN) starts to be heard, the locus of the SNP + verb constraint moved posteriorly into the left posterior MTG (LpMTG) and
LAG, which lasted until the CN is recognized. Here, we discuss our results in relation to incremental processing issues from the SNP to the CN (see Fig. 6).

Early Activation of the SNP Constraints

Our results revealed that different aspects of the SNP constraints are activated between the point at which the SNP is recognized (i.e., the UP of the SN) and its offset ∼100 ms later, and that these computations recruit different brain areas. First, the SNP constraint on upcoming verbs (Fig. 5a) appeared only in mid-anterior portions of the right middle/inferior temporal gyri (RMTG/ITG), whereas the SNP constraint on upcoming CNs (Fig. 5b) involved more extensive regions including right ATL (RATL), RIFG, and LH temporal cortex. The important similarities and differences in the neurobiological basis of these constraints are that 1) the core regions involved in constructing both types of constraints included RH anterior MTG/ITG regions, and 2) only the nonadjacent SNP constraint on CNs elicited activation in the RIFG, which lasted all the way until the verb was recognized in Epoch 2. These regions are plausibly involved in generating and maintaining the event representations, which are naturally generated at the beginning of sentences and form a basis for semantic constraints on upcoming speech (Marslen-Wilson et al. 1993; Nieuwland and Van Berkum 2006). Various studies (Marslen-Wilson and Tyler 1980; Kamide et al. 2003) have shown that listeners use multiple sources of information at the earliest possible opportunity to establish the fullest possible interpretation of what they are hearing, and demonstrate that such processes are not restricted to the syntactic
structure of language. One of the prediction principles (Altmann and Mirković 2009) that underpin human language comprehension states that the mapping between the unfolding sentence and the event representation enables listeners to predict both how the language will unfold and how the real-world event will unfold, meaning that prediction cannot operate without incrementally developing event representations. In line with these claims, our results revealed consistent activations of RH mid-anterior temporal regions for different semantic constraints, likely reflecting the broad scenarios activated by the SNP. This claim is further supported by three major findings from our main and complementary analyses, possibly indicating that the constraints are activated from the same set of scenarios drawn by the SNP: 1) the same activation timing for different SNP constraints around the UP of an SN; 2) a common subspace existing between different SNP constraints (see Fig. S4 in Supplementary Materials Section 4); and 3) a significant activity pattern in the RH mid-anterior temporal regions for the joint semantic constraint of the SNP on verb and CN (i.e., the early event-level constraint; see Fig. S5 in Supplementary Materials Section 5). The activation of RH regions has been consistently reported when drawing coherent "message-level" interpretations in speech comprehension (Beeman and Chiarello 1998; Beeman et al. 2000; Jung-Beeman 2005), consistent with studies claiming the importance of the RH in processing linguistic context (Kircher et al. 2001; Bookheimer 2002). These findings have been supported by previous ERP studies showing that the RH plays an important role in interpreting individual words with respect to a larger-scale context
(Federmeier and Kutas 1999; Wlotko and Federmeier 2007; Federmeier et al. 2008), emphasizing the role of the RH in processing context-driven semantic relationships (Federmeier et al. 2008). Hence, the early effect in the right temporal regions in the current study is likely related to the process of generating the constraint driven by the SNP context, setting up the event-level scenarios of what is likely to be talked about (Elman 2011). However, two additional areas, in the LH temporal lobe and RIFG, were engaged in constraining the nonadjacent CN based on the SNP context (Fig. 5b). The two critical differences between the SNP constraints are 1) the grammatical category of the constrained words and 2) adjacency with respect to the SNP context. Previous studies have shown the engagement of the LH temporal regions when processing nouns compared with verbs (Siri et al. 2007; Vigliocco et al. 2011). Unlike the bilateral temporal regions, the RIFG cluster remained significant after the verb onset until the verb was recognized. Consistent with this finding, recent studies have reported RIFG as part of the extensive network involved in constraining an upcoming word (Willems et al. 2015) and resolving semantic competition (Kocagoncu et al. 2017). More generally, this region has been implicated in semantic maintenance and cognitive control (Shivde and Thompson-Schill 2004; Gajardo-Vidal et al. 2018), activating when processing an indeterminate sentence that can be interpreted in many different ways (de Almeida et al. 2016) or when encountering a word with multiple meanings in a spoken sentence (Rodd et al. 2005; Mason and Just 2007). Therefore, the SNP
constraint effect in RIFG during the verb likely reflects the maintenance of the SNP semantic constraint while resolving competition as the verb is being heard.

Evolving Constraint

The essence of incremental speech comprehension is that each word is interpreted in a context-relevant manner, and the constraint derived from the prior context is updated to become more specific and informative about the upcoming words in the sentence as more words are heard (Kuperberg and Jaeger 2016). To investigate this incremental development (i.e., how the prior SNP constraint on CNs evolves as a verb is recognized), we constructed a model that captures the semantic constraint on CNs based on the full SNP + verb context. Our results showed that the effect of the SNP + verb constraint appears at 60 ms after the verb onset in the right mid-anterior MTG/ITG regions, extending to LATL and LIFG and peaking around 400 ms after the verb onset (i.e., close to the mean verb offset). As the target word (i.e., CN) is being heard, the cluster moved into more posterior areas involving LMTG and LAG, lasting until the CN is recognized (Fig. 5c). These transitions across time may highlight differential roles engaged by these regions when constraining the CN. For example, as discussed above, the early RH temporal effect most likely reflects the broad constraint on the CN, primarily set up by the SNP (i.e., in natural language comprehension, it is highly unlikely that an incoming verb is completely incongruent with the activated scenarios). Then, the ventral fronto-temporal network in LH
including LIFG (BA47/45) and LATL additionally engages in constraining the CN as the verb is recognized. The broad scenarios activated by the SNP become more fine-tuned as the semantics of the verb is combined with the SNP context. According to the timing of the LIFG-LATL activations, these regions may play an important role in resolving uncertainty by updating the sentential meaning so that it becomes more specific. Further support for this argument comes from a complementary analysis (see Fig. S3-2 and Fig. S3-3 in Supplementary Materials Section 3) showing a statistically significant reduction in entropy between the SNP constraint and the SNP + verb constraint, which reflects an important aspect of incremental speech comprehension (Hale 2006) (see Fig. 3). As LATL is directly connected to LBA47 via the uncinate fasciculus (Catani et al. 2005), our results suggest that interaction within the anteroventral fronto-temporal network is involved in developing a more informative constraint based on the combined context of SNP + verb. After the onset of the target word (CN), we observed a significant cluster moving into more posterior regions, including LpMTG and LAG, until around the UP of the CN. The transition and timing of this cluster may reflect the facilitatory effect of the contextual (SNP + verb) constraint on activating the semantic content of the CN, as these regions are often involved in activating lexical-semantic content (Hickok and Poeppel 2007) and combining it into the preceding context at both phrasal and sentential levels (Humphries et al. 2007; Schell et al. 2017; Lyu et al. 2019). Therefore, such anterior (BA47/45
and LATL) to posterior (LpMTG/LAG) transition likely reflects the interaction between top-down (i.e., the SNP + verb constraint) and bottom-up (i.e., speech input of the CN) information, which generates a coherent semantic interpretation of the CN with respect to the preceding SNP + verb context.

Constraint Evaluation

Developing an event representation requires each word in a sentence to be interpreted in light of the prior context. This process, in turn, requires semantically evaluating each word against the prior constraint, indexed by the degree of mismatch between the context and an upcoming word. To address this issue, we tested the effect of the contextual (SNP + verb) constraint on the interpretation of the target word (CN) by quantifying the degree of mismatch between the sentential context and the target word in terms of the spatiotemporal patterns of neural activity after the CN onset. We found that activity patterns in LpMTG were sensitive to the mismatch between the constrained and the actual topic representation from 370 to 520 ms (Fig. 5d). Interestingly, this timing occurred immediately after the constraint effect disappeared. In the literature, LpMTG is commonly reported in studies of semantics (Price 2010) and is typically known as the source of the N400 effect (Lau et al. 2008; Kutas and Federmeier 2011). A recent study reported that predictability (e.g., "runny nose" vs. "dainty nose") estimated from corpus data modulated the N400 component in LpMTG (Lau and Namyst 2019), reducing the necessity of activating the stored lexical representation of the target word (CN in our study) when it is
strongly constrained by the context (i.e., high predictability). This argument is further supported by our previous study (Lyu et al. 2019), in which the semantic representation of a CN was strongly modulated by the preceding verb; for example, the verb in context (e.g., the man "ate") pruned the less relevant CN topics, allowing listeners to interpret the CN (e.g., "apple") more specifically, with the CN topics that were supported by the preceding verb (e.g., topics related to "food" but not those related to "shape" or "color"). While the exact computational details of the mismatch effect remain elusive, our findings suggest that listeners not only develop semantic constraints on upcoming words but also use these constraints to efficiently derive the context-relevant interpretation of upcoming words such as the CN. Combined with the other constraint effects discussed above, these results clearly illustrate the incremental stages of predictive processing that enable listeners to construct the message-level interpretation from the three crucial components in a sentence (SNP, verb, and CN).

Implications for Future Studies

Previous studies have explained neuroimaging data using computational models to quantify entropy at the lexical (Willems et al. 2015) and phonological levels (Donhauser and Baillet 2020). In these studies, neural network models with a recurrent architecture were commonly employed to generate a context-dependent linguistic prediction as a probability distribution from which entropy can be computed. Building on these studies, the current study examined the semantic aspect of incremental language prediction using the entropy of topic distributions, designed to express the co-occurrence relations among words in different grammatical categories
through estimating the expected posterior of the multinomial parameters (see Supplementary Material Section 1). In this section, we motivate the choice of our computational model and approach while discussing its limitations and directions for future studies. Recent advances in the field of computer science have established a number of different computational algorithms to construct distributional semantic models, optimally reflecting the content of each lexical item in a set of latent dimensions. Perhaps the most popular is neural network training with a recurrent architecture, including the recurrent neural network (RNN) and long short-term memory (LSTM). However, we chose topic modeling based on LDA to exploit two of its critical aspects:

1. It produces a semantic vector of a word as a probability distribution over latent semantic dimensions (topics). This allows us to construct our incremental models under the Bayesian computational framework (Kuperberg and Jaeger 2016), a useful approach for understanding predictive processing in language.

2. It explicitly depicts the semantic relations between words in different positions in a sentence. Our implementation of topic modeling, which treats SNs and verbs as "documents" and CNs as "words," is specifically designed to explain semantic prediction and updates based on key words in the context.

Its explanatory value as a predictive model is one of its biggest assets, making it particularly attractive in the field of psycho- and neurolinguistics. Nonetheless, one critical limitation of this approach is that it is not an incremental model by itself, unlike an RNN or LSTM. To address this issue, we introduced the method of
blending a set of topic vectors based on Cloze probabilities calculated from sentence completion studies. Despite the popularity of Cloze probability as a direct behavioral measure of human prediction, its application entails high subjective bias, often affected by confounding factors such as familiarity (Smith and Levy 2011). Although Cloze probability was significantly related to corpus probability, it also significantly deviated from it, with greater entropy in responses, making Cloze a suboptimal estimate of linguistic prediction, even though it has been successful in explaining neural responses (DeLong et al. 2005; Kutas and Federmeier 2011). Moreover, another confounding factor of Cloze is that the prediction may well be driven by a pragmatic inferential process, not purely by semantic associations. Hence, it remains controversial whether the basis of the incremental prediction is semantic or pragmatic in nature. Despite the objective and accurate probability estimates that large-scale corpora offer, there is a practical limitation to applying corpus probability as the number of words in the model increases (i.e., increasing N in an N-gram probability). Even with large-scale corpora, the estimation of co-occurrence probability becomes very difficult with N > 3. With our stimuli containing 6-7 words before the CN (e.g., "The experienced walker chose the path"), computing such a conditional probability becomes infeasible. Taken together, future studies need to develop a self-explanatory incremental model, allowing us to characterize evolving representations. Recent developments of more sophisticated models such as generative pretraining (Radford et al. 2018) have shown impressive performance on making output predictions, but their multilayered internal representations are highly complex and
lack the explanatory value needed to provide insights into predictive processing in the human brain. Quantifying different aspects of representation that incrementally evolve over time in these models will initiate more model-driven decoding research on brain data, shedding light on the neurobiological basis of incremental speech comprehension. Additionally, although we constrained our search space within a language mask to characterize the linguistic aspects of predictive processing and the specific computations involved in constraining upcoming words, other brain networks involved in different cognitive functions, such as attention and/or memory, may also be involved in such linguistic processes of understanding speech. With the ultimate goal of expanding our research to discourse and narrative comprehension, such whole-brain analysis will contribute to understanding the interactive nature of cognitive processes during language comprehension. Finally, there have been growing efforts to elucidate the interactive nature of cognition, bringing multiple domains of cognition such as language and memory into a unifying framework (Duff and Brown-Schmidt 2017). For example, developing an event representation involves the episodic realization (e.g., "orange" in "She peeled an orange and ate it quickly") of a semantic type (e.g., "orange" in general). The role of such an episodic-semantic interface during natural language comprehension is extensively discussed in a recent account (Altmann 2017), identifying the hippocampal structures as one of the neurobiological bases for encoding distinct episodes (McClelland et al. 1995). While we have shown that incremental predictive processes can be characterized even with such generic linguistic stimuli, we advocate the need for more specific stimuli in a narrative context in order to distinguish
an episodic token from a semantic type. In this way, the stimuli would have sufficient variability to provide distinguishable representational geometries between the two, allowing researchers to investigate the interactive event dynamics beyond combinatorial semantics from semantic memory alone. As a final remark, this study focused on presenting a possible approach to investigating one of the core processes (i.e., prediction) of human event cognition during natural speech comprehension. Future studies will need to expand this research to investigate other central cognitive processes involved in understanding event dynamics and to illuminate their neurobiological underpinnings, likely recruiting multiple interactive networks in the brain outside the language network.

Conclusion

In this study, we demonstrated the neurobiological basis of incremental predictive language processing by characterizing the spatiotemporal dynamics of source-localized EMEG data with ssRSA, using rich co-occurrence computational semantic models based on topic modeling combined with human behavioral data. To summarize our results, an extensive bilateral fronto-temporo-parietal network is actively engaged in generating and developing incremental semantic constraints on upcoming words (see Fig. 6). Our results highlight the temporal progression of semantic constraint development: 1) an RH fronto-temporal network initially generates possible scenarios as the SNP is heard, which, in turn, 2) recruits a LH fronto-temporal network as the scenarios are enriched by subsequent words (a verb in this case), 3) terminating in a LH posterior temporo-parietal network as the target word (CN) is recognized. To our knowledge, none of the neurobiological models of speech comprehension have explained this range of sequential temporal relationships among multiple regions