

        A situation awareness assessment method based on fuzzy cognitive maps


CHEN Jun, GAO Xudong, RONG Jia, and GAO Xiaoguang

1. School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China; 2. Chongqing Institute for Brain and Intelligence, Guangyang Bay Laboratory, Chongqing 400064, China; 3. Department of Data Science and AI, Monash University, Clayton VIC 3800, Australia

Abstract: The status of an operator's situation awareness is one of the critical factors that influence the quality of a mission, so methods for measuring situation awareness are an important research topic. Many measurement methods have been designed, but none of them can measure situation awareness accurately in real time; this work addresses that gap. First, the relevant physiological data of operators are collected while they perform a specific mission, and at the same time their situation awareness is measured with the situation awareness global assessment technique (SAGAT), which is known to be accurate but cannot be used in real time. Then, after preprocessing the raw data, the physiological data are used as features and the SAGAT results as labels to train a fuzzy cognitive map (FCM), an explainable and powerful intelligent model. A hybrid learning algorithm combining particle swarm optimization (PSO) and gradient descent is also proposed for FCM training. The final results show that the learned FCM can assess the status of situation awareness accurately in real time, and that the proposed hybrid learning algorithm offers better efficiency and accuracy.

Keywords: situation awareness (SA), fuzzy cognitive map (FCM), particle swarm optimization (PSO), gradient descent.

1. Introduction

As industrialization and information technology accelerate, more and more complex machines and operating systems are being invented and put into practical use, and their operators are confronted with increasingly complex and rapidly changing systems. Situation awareness (SA) has become a key component of the decision-making process for dynamic systems [1]. As the name implies, SA concerns the understanding and knowledge of the situation under current conditions. The concept is most widely used in aviation; it can even be argued that the concept of SA was discovered and refined through research in the aviation sector [2]. Pilots are often faced with a large amount of information to perceive and process during a flight or mission, requiring accurate perception and understanding of that information. SA theory is also applied in other areas that involve a human in the cognitive loop, such as air traffic control [3], the operation of large systems in manufacturing plants [4], and medical systems [5]. Thus, we do not distinguish between pilots and operators in the rest of this paper.

The status of an operator's SA is clearly very important during a mission, and many measurement methods for SA exist, such as the situation present assessment method (SPAM) [6], the SA global assessment technique (SAGAT) [7], and physiological process indices [8]. However, no existing method can measure the status of SA in real time with high accuracy: the SAGAT is accurate enough but cannot be applied in real time, while physiological process indices can be measured in real time but lose accuracy. This paper therefore aims to close the gap between accuracy and real-time capability in SA measurement.

As mentioned before, the SAGAT method has high accuracy, and the physiological process indices method can be applied in real time. The basis of the physiological indices method is that indices such as eye movement, blood oxygen saturation, and heart rate variability can reflect a human's psychological status, such as worry, happiness, and fatigue [9]. Thus, we propose the hypothesis that physiological indices can also predict the status of SA. The methods in the literature fail in accuracy because they are not calibrated against an accurate method.

This paper therefore uses the results of the SAGAT, which has already been proved accurate, to calibrate an SA assessment model based on physiological indices. First, an experiment is conducted to collect the data, which include the SAGAT results and the relevant physiological indices recorded at the same time. Second, a fuzzy cognitive map (FCM) [10] is trained with a supervised learning method, using the physiological indices as features and the SAGAT results as labels. After the training process, the FCM is the final SA assessment model.

The FCM is chosen as the model because of its interpretability and ease of use [11]. The FCM is a relatively new kind of intelligent model: it resembles a neural network but supports circular causality [12]. In most cases where the status of an operator's SA needs to be assessed, such as a pilot landing an aircraft or a surgeon performing a critical operation, we need to know why the SA status is good or bad, so popular models such as deep neural networks cannot be used, since they are "black boxes" that cannot be explained. The FCM's structure and reasoning process are very similar to those of a neural network [13], so it has enough knowledge representation and processing ability to assess the SA status, and it is more powerful than many classic pattern recognition methods such as random forests and decision trees.

The main obstacle is that the FCM lacks an efficient learning algorithm [14]. This paper tackles the problem with a hybrid method combining particle swarm optimization (PSO) and gradient descent, which improves both efficiency and learning accuracy.

        The innovations of this paper are as follows:

(i) An FCM-based model is built to assess the SA status of operators; the model is explainable and can measure the SA status accurately in real time.

(ii) A hybrid algorithm of PSO and gradient descent is used to train the FCM, exploiting PSO's global optimization capability and the efficiency of gradient descent simultaneously.

The rest of this paper is organized as follows: Section 2 clarifies the definition of SA and reviews the measurement methods in the literature; Section 3 describes the data collection method, the basics of the FCM, and the learning algorithms; Section 4 presents the data preprocessing; Section 5 presents the learning results and their analysis; Section 6 concludes the paper.

2. SA

This section presents the relevant concepts of SA and an overview of SA measurement methods.

        2.1 Concepts about SA

The concept of SA was introduced by the US military in the 1980s and went through a series of debates, from the 1980s to the 1990s, about its definition and even about whether it could be defined. The academic community now broadly accepts the three-level definition given by Endsley, chief scientist of the US Air Force: the perception of the elements in a given volume of time and space, the comprehension of their meaning, and the projection of their status in the near future [1].

This definition is based on the dynamic-systems decision model, which shows that SA is a person's preparation before making a decision and is also the interface between the person and the external environment. It is important to note that SA is a state of knowledge rather than the process of acquiring that knowledge; the latter is generally referred to as achieving, acquiring, or maintaining SA. SA also does not include all of a person's knowledge, because the process of building SA is considered to take place in working memory, i.e., short-term memory; SA includes only the task-relevant knowledge that is currently recalled by the operator. That is, SA does not include knowledge stored in the operator's long-term memory that is not recalled in the course of performing the task. SA should also be distinguished from decision making and task performance: an experienced operator may still perform well when his or her SA is poor, while an inexperienced novice may perform poorly even with good SA. Likewise, factors that influence SA, such as working memory and stress, should be distinguished from SA itself.

From Fig. 1, it can be seen that SA is divided into three levels in progressive order: perception, comprehension, and projection. The higher levels depend on the formation of the lower levels.

Fig. 1 Three-level structure of SA [15]

        (i) Level 1: Perception

The first step in gaining SA is to perceive the states and attributes of the key elements in the current environment. For example, a pilot should perceive aircraft-related parameters such as altitude, speed, heading, and angle, as well as environment-related parameters such as the presence of mountains.

(ii) Level 2: Comprehension

The second level of SA is the comprehension of the current situation. The disordered information from the first level is sorted out and integrated, and the interrelationships between the elements are processed to obtain specific higher-level information, for example, identifying the type of an aircraft from its maneuvering characteristics. From the viewpoint of the mission goal, the unimportant elements are eliminated. Given the same first-level information, an experienced operator is likely to obtain better second-level information than a novice.

(iii) Level 3: Projection

With the second-level information, the operator is ideally able to make certain predictions about the future state of the system based on his or her understanding of it. For example, if a pilot predicts that the aircraft will hit a mountain if it maintains its current heading, he or she must take the necessary actions to avoid it.

        2.2 An overview of assessment methods for SA

There are a number of evaluation methods for SA [16]. These methods fall broadly into the following categories:

        (i) SA requirement analysis

With the goal-directed task analysis (GDTA) method, the SA requirements are analyzed and the individual required states are then measured separately [17].

        (ii) Freeze probe techniques

At a random point in time while the operator is performing a task, the operator is completely removed from the task situation, and the operator's SA state is then immediately and rapidly measured using an appropriate scale. The SAGAT is a well-known method that can be seen as a freeze probe technique [18].

        (iii) Real-time probe techniques

The difference between the real-time probe technique and the freeze probe technique is that the operator is not removed from the task situation; the surveyor evaluates SA by asking about it directly, based on parameters such as the correctness of the operator's answers and the operator's reaction time. However, the evaluation process can interfere with the operator's SA state, resulting in inaccurate measurement results. The SPAM, which was proposed by Durso et al. and used to assess the SA of air traffic control (ATC) personnel, is representative of this approach [19].

        (iv) Self-rating techniques

After the operator has performed the task, the operator's SA is evaluated using a relevant SA scale. Clearly, this is a post-test measurement method that avoids the effects of the measurement process on the state of SA (in fact, as confirmed by the experiments of Endsley et al., the measurements taken during the SAGAT also did not influence the operator in a statistically significant way). The SA rating technique proposed by Taylor in 1990 is representative of this type of approach: operators self-assess on 10 dimensions using a 7-point scale after completing the task [20]. The advantage of this type of method is that the operator performs the task without interference, but the operator's ability to assess himself or herself objectively and accurately greatly affects the reliability and validity of the measurement.

        (v) Observer-rating techniques

In the observer-rating technique, the operator, as the person being observed, is only responsible for performing the task, does not interact with the assessor, and is thus not disturbed. The assessor is a subject matter expert (SME) who assesses the SA of the observed person by watching the operator's performance and the behavior associated with SA. The problem is whether the SME can accurately determine the SA of the observed person: because the SME is not the observed person, he or she has no way of knowing what is going on in the observed person's mind [21]. This makes the reliability and validity of the method extremely challenging.

        (vi) Performance measures

Gugerty (1997) measured the SA of drivers in a simulated driving environment using three performance indicators: hazard detection, blocking-vehicle detection, and collision avoidance [22]. However, an expert can perform well when his SA is poor, while a novice can make poor decisions when his SA is good; that is, what is measured is performance, not SA.

        (vii) Process indices

The process of constructing one's own SA is accompanied by process indicators, such as observing relevant information, scanning for relevant information, and reasoning about the necessary information. Typical methods include verbal protocol analysis, in which subjects report their SA verbally in real time as they perform the task. The use of eye trackers and physiological data to determine the operator's SA also belongs to this approach [23].

3. Method and model

The operator's level of SA is assessed in real time using the operator's eye movement and physiological data. The data used are the operator's eye movements and physiological signals recorded while operating the aircraft in a cruise mission on the flight simulation experiment platform (Fig. 2). The level of SA, which serves as the label of the learning data, is assessed by the SAGAT, in which the operator is completely isolated from the mission environment and assessed with a questionnaire.

Fig. 2 Simulation experiment platform

After acquiring the relevant data, appropriate preprocessing is carried out to extract candidate indicators, and feature selection is then performed. Subsequently, the selected features and the SAGAT test results are used as training data, and the corresponding model and learning methods are selected for training.

3.1 Data collection

The data are obtained as follows:

(i) Flight simulations are performed by subjects on a flight simulation experiment platform, during which the subjects' eye movement data and physiological data are measured in real time by an eye tracker and physiological instruments.

(ii) The experimenter stops the subject's task after a random time (longer than 6 s); the subject is removed from the simulation environment, the eye tracker and physiological measurements are ended, and the data from this measurement are saved.

(iii) Immediately after the subject is removed from the simulation environment, the subject's level of SA is tested with a task-specific SAGAT and the results of this measurement are saved.

        3.2 FCM

The FCM was proposed by Kosko in 1986 as a soft computing method. It is a combination of fuzzy logic and neural networks, and it consists of a number of variable concepts and the causal relations between them. Nodes represent concepts, and directed edges with weights indicate the connections between these nodes. A simple FCM model is shown in Fig. 3.

Fig. 3 A simple FCM

Each concept node has its own value, which is always a fuzzy value representing how well the current state matches the concept. Values in the interval [-1, 1] or [0, 1] are usually chosen.

The weight is also a fuzzy number, such as $w_{12}$ in Fig. 3, and a real number in the interval [-1, 1] is usually chosen as the weight value. A negative weight means that the presynaptic and postsynaptic nodes change in opposite directions during stimulation; a positive weight means that they change in the same direction. The absolute value of the weight represents the magnitude of the influence between the two nodes.

Not only does the FCM have a powerful knowledge representation capability, but it can also use the information it collects to infer the values of the concepts. For example, in each iteration, each concept can be given a new value by the following equation:

$$C_i(t+1) = S\Big(\sum_{j=1}^{N} w_{ji}\, C_j(t)\Big),$$

where $W$ is the weight matrix, $C$ is the node vector, and $S(\cdot)$ is the activation function that compresses the node values into [0, 1] or [-1, 1]. When the node values lie in [0, 1], the activation function is the sigmoid

$$S(x) = \frac{1}{1 + \mathrm{e}^{-\lambda x}};$$

when the node values lie in [-1, 1], $S(\cdot)$ is the hyperbolic tangent.

The positive parameter λ is used to control the steepness of the curve: the larger λ is, the steeper the curve.
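To make the reasoning step concrete, the following is a minimal Python sketch of FCM inference under the definitions above, assuming node values in [0, 1] and the sigmoid activation; the three-node map, its weights, and λ = 2 are illustrative values, not taken from the paper.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    """S(x) = 1 / (1 + exp(-lambda * x)); lam controls the steepness of the curve."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(concepts, weights, lam=1.0):
    """One FCM reasoning iteration: node i receives the sum of w_ji * C_j over all nodes j."""
    return sigmoid(weights.T @ concepts, lam)

# Illustrative 3-node map; weights[j, i] is the causal weight from concept j to concept i.
W = np.array([[0.0, 0.6, -0.3],
              [0.0, 0.0,  0.8],
              [0.4, 0.0,  0.0]])
C = np.array([0.5, 0.2, 0.7])        # initial concept activations
for _ in range(10):                   # iterate until the map settles
    C = fcm_step(C, W, lam=2.0)
print(C)
```

Depending on the weights, repeated iteration converges to a fixed point or keeps oscillating; the weights themselves are what the learning algorithms in Section 3.3 estimate from the data.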

        3.3 The learning algorithm of FCM

At the current state of the art, learning algorithms for FCMs are mainly population-based heuristics and hybrid methods that combine expert knowledge with knowledge extracted from data, and heuristic learning methods have become the dominant way to extract knowledge from data for FCMs [24]. However, heuristic learning algorithms such as genetic algorithms and particle swarm algorithms, while able to produce weights that meet the requirements of the learning task, have long learning times and limited learning accuracy. Therefore, many scholars have considered using non-heuristic learning algorithms to learn FCMs. Some scholars have already tried to use gradient descent to learn FCMs [11], but the proposed method requires the data after each iteration as labels, whereas in reality the data of a complex system are usually only its final outputs; the intermediate process data are almost impossible to obtain, so such a gradient descent method is difficult to apply in practice.

By calculating the gradient numerically, the problem that the gradient is difficult to derive analytically, owing to the extreme complexity of the objective function, is bypassed, thus eliminating the need to use intermediate data from the operation of the system as labels when learning FCMs.

Due to the complexity of the objective function, a pure gradient descent method very easily falls into a local optimum. In fact, in our simulations the gradient descent method fell into an unacceptable local optimum almost every time, resulting in underfitting and poor learning. Therefore, a combination of the PSO algorithm and the gradient descent method is eventually used to avoid falling into a local optimum prematurely: the particle swarm algorithm is used to obtain a solution that has already reached a certain level of accuracy, and this solution is then used as the initial value for the gradient descent method. That is, PSO is used for coarse tuning and gradient descent for fine tuning. Simulations show that the comprehensive performance of this hybrid learning algorithm is the best.

        3.3.1 FCM learning algorithm based on gradient descent

Applying gradient descent to FCM learning means optimizing the objective function by gradient descent, with the weights updated by

$$w_{ij} \leftarrow w_{ij} - \eta\, \Delta w_{ij},$$

where $\Delta w_{ij}$ is the gradient and $\eta$ is the learning step size, whose value affects the learning rate and accuracy. The value of $\eta$ can be adjusted dynamically according to the number of learning steps, and the smallest possible step size is used when the number of steps is large enough or the solution is close to the optimum. $f(W)$ is the objective function, taken as the error between the FCM's outputs and the training labels, and it is minimized to learn the FCM.

Due to the complexity of FCMs, solving for $\Delta w_{ij}$ by analytic methods is almost impossible, so the gradient is obtained directly from the definition of the partial derivative (6):

$$\Delta w_{ij} = \frac{\partial f(W)}{\partial w_{ij}} \approx \frac{f(W + \delta\, e_{ij}) - f(W)}{\delta}, \qquad (6)$$

where $e_{ij}$ denotes the matrix whose $(i,j)$ entry is 1 and whose other entries are 0, and δ is taken to be as close to 0 as computer-processable accuracy allows. To make the calculated partial derivatives as accurate as possible, δ = 10⁻¹⁴ is used in this paper. The algorithm for calculating the gradient is given by the function Gradient(·).

As the gradient descent iterations proceed, the objective function gradually approaches a suboptimal value, and the learning step $\eta$ should not be too large at this point, otherwise the solution will easily oscillate back and forth around the suboptimal value. The learning step therefore needs to be adjusted dynamically, and the following adjustment strategy is set:

$$\eta(t+1) = a\, \eta(t).$$

If $a$ is set to a positive number less than 1, the learning step $\eta$ becomes smaller and smaller as the learning result approaches the suboptimal solution, and the learning accuracy increases; $a$ is usually taken in [0.9, 0.99].

The gradient descent method is particularly prone to local optima when searching for the minimum of a complex function. Therefore, when the gradient is detected to have fallen to 0 while the learning result is still unsatisfactory, the initial value and the learning step are reset and the search is restarted. Generally, a threshold $\sigma$ on the objective function is set:

$$\text{if } \|\nabla f(W)\| = 0 \ \text{and}\ f(W) > \sigma,\ \text{then}\ W \leftarrow \mathrm{rand}(\mathrm{size}(W)),\ \eta \leftarrow \eta_0,$$

where $\eta_0$ denotes the initial learning step, generally taken as 0.1, and rand(size(W)) denotes regenerating the initial $W$. In summary, the learning algorithm for FCMs based on gradient descent is shown in Algorithm 1.
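As a concrete illustration of the ingredients of Algorithm 1, the following Python sketch combines a numerically differentiated gradient, a multiplicatively decaying step, and a random restart when a poor local optimum is reached. The objective f is treated as a black box (in this paper it would measure the error between the FCM output and the SAGAT labels); the default perturbation below is larger than the paper's δ = 10⁻¹⁴, and the decay and restart settings are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def numeric_gradient(f, W, delta=1e-6):
    """Finite-difference gradient of f at W (the paper reports using delta = 1e-14)."""
    grad = np.zeros_like(W)
    base = f(W)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += delta
        grad[idx] = (f(Wp) - base) / delta
    return grad

def gd_learn_fcm(f, shape, W_init=None, eta0=0.1, a=0.95, sigma=1e-3, max_iter=2000):
    """Gradient descent over the FCM weight matrix with step decay and random restarts."""
    W = W_init.copy() if W_init is not None else np.random.uniform(-1.0, 1.0, size=shape)
    eta = eta0
    for _ in range(max_iter):
        grad = numeric_gradient(f, W)
        if np.linalg.norm(grad) < 1e-12 and f(W) > sigma:
            # stuck in an unsatisfactory local optimum: regenerate W and reset the step
            W = np.random.uniform(-1.0, 1.0, size=shape)
            eta = eta0
            continue
        W = np.clip(W - eta * grad, -1.0, 1.0)   # keep weights inside [-1, 1]
        eta *= a                                  # simple multiplicative step decay
        if f(W) < sigma:
            break
    return W
```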

        3.3.2 FCM learning algorithm based on PSO

This section focuses on a particle swarm-based FCM learning algorithm. In PSO, each possible solution to the optimization problem is imagined as a bird, called a "particle". There are m particles in a d-dimensional space, and at a given moment the position of particle i is

$$x_i = (x_{i1}, x_{i2}, \ldots, x_{id}).$$

The best historical position through which the population has passed is

$$p_g = (p_{g1}, p_{g2}, \ldots, p_{gd}).$$

Based on its own experience and the positions of the other birds in the flock, each bird decides its velocity while searching for food; from its current position and velocity it obtains its position at the next moment, so each bird constantly updates its velocity and position by learning from itself and from the flock. The update rule for the i-th particle from moment t to moment t+1 can be summarized by the following equations:

$$v_i(t+1) = I\, v_i(t) + C_1 R_1 \big(p_i - x_i(t)\big) + C_2 R_2 \big(p_g - x_i(t)\big),$$

$$x_i(t+1) = x_i(t) + v_i(t+1),$$

where $p_i$ is the best historical position of particle i itself, $I$ is the inertia factor, usually taken as 1, $C_1$ and $C_2$ are the learning factors, usually taken as 2, and $R_1$ and $R_2$ are random numbers in (0, 1). The above equations update one particle in the population, and all particles in the population are updated in turn.

For an N-node FCM, similarly to the gradient descent method, each particle represents all the values of the weight matrix.

At each update step, it is checked whether the velocity of each particle is within the specified range, and individuals whose velocity exceeds the maximum are forced back into the range (-Vmax, Vmax). In the later stages of learning an FCM with the particle swarm algorithm, the velocity range of each particle needs to be contracted, similarly to the dynamic adjustment of the learning step in the gradient descent method, so the following strategy for dynamically adjusting the velocity range is designed:

$$V_{\max}(t+1) = a\, V_{\max}(t).$$

With $a$ set to a positive number less than 1, the velocity range gets smaller and smaller as the learning result gets closer to the suboptimal solution, and the learning accuracy increases; $a$ is usually taken in [0.9, 0.99]. In summary, the algorithm for learning FCMs using the particle swarm algorithm is shown in Algorithm 2.
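A compact sketch of the PSO stage, following the update rule above with I = 1 and C1 = C2 = 2; the swarm size, iteration budget, initial velocity range, and the contraction factor a are illustrative, and the optional stop_below threshold anticipates the hand-over to gradient descent described in the next subsection.

```python
import numpy as np

def pso_learn_fcm(f, shape, n_particles=30, max_iter=500, inertia=1.0,
                  c1=2.0, c2=2.0, v_max=0.5, a=0.95, stop_below=None):
    """PSO over FCM weight matrices; each particle holds a full weight matrix in [-1, 1]."""
    X = np.random.uniform(-1.0, 1.0, size=(n_particles, *shape))   # positions
    V = np.random.uniform(-v_max, v_max, size=(n_particles, *shape))
    P = X.copy()                                   # personal best positions
    p_cost = np.array([f(x) for x in X])
    g = P[np.argmin(p_cost)].copy()                # best position of the whole population
    g_cost = p_cost.min()
    for _ in range(max_iter):
        R1 = np.random.rand(n_particles, *shape)
        R2 = np.random.rand(n_particles, *shape)
        V = inertia * V + c1 * R1 * (P - X) + c2 * R2 * (g - X)
        V = np.clip(V, -v_max, v_max)              # force velocities back into (-v_max, v_max)
        X = np.clip(X + V, -1.0, 1.0)
        cost = np.array([f(x) for x in X])
        better = cost < p_cost
        P[better], p_cost[better] = X[better], cost[better]
        if p_cost.min() < g_cost:
            g, g_cost = P[np.argmin(p_cost)].copy(), p_cost.min()
        v_max *= a                                 # contract the velocity range in later stages
        if stop_below is not None and g_cost < stop_below:
            break                                  # good enough: hand over to fine-tuning
    return g, g_cost
```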

        3.3.3 Hybrid FCM learning algorithm based on PSO and gradient descent

Compared with the gradient descent method, the particle swarm algorithm has better global search capability and can converge quickly to the neighborhood of the optimal or suboptimal solution. However, as an evolutionary algorithm it relies on comparing the fitness of a large number of candidate solutions, so the objective function value decreases very rapidly at the beginning of the iterations but changes very little in subsequent iterations, often hovering around the optimal solution and consuming a lot of computation time. The gradient descent method, on the other hand, is a commonly used optimization algorithm with a very clear objective: to update the variables in the direction that decreases the error function. However, the choice of the initial values has a crucial impact on its results, and can even directly determine whether the algorithm works at all. The objective function of FCM learning is very complicated and its landscape is rugged, so the selection of the initial weight values is very important. Combining the characteristics of the two algorithms can further improve the accuracy: the weight matrix obtained by the particle swarm algorithm is used as the initial weight matrix of the gradient descent method. In effect, the particle swarm algorithm, with its superior global search capability, finds a solution near the optimum for the gradient descent method (and is stopped before it starts hovering around the optimum), and the gradient descent method then fine-tunes the weight matrix so that it is as close to the true values as possible.

To achieve the desired speed and accuracy, it is necessary to find the best point at which to switch between the two algorithms, i.e., the smallest number of iteration steps at which the particle swarm algorithm has come near the global optimum, so that the solution it outputs not only takes less time to obtain but also tends to bring the result of the subsequent gradient descent closer to the true values. In practice, a relatively loose threshold is set; when the value of the objective function falls below this threshold, PSO is stopped and the training parameters are passed to the gradient descent method, which continues the learning process. The FCM learning algorithm based on PSO and gradient descent is shown in Algorithm 3, and its flow chart is shown in Fig. 4.
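Putting the two stages together, reusing the pso_learn_fcm and gd_learn_fcm sketches above: PSO performs the coarse tuning until the objective drops below a hand-over threshold, and gradient descent then fine-tunes the returned weights, roughly in the spirit of Algorithm 3. The threshold value here is an illustrative choice.

```python
def hybrid_learn_fcm(f, shape, handover=0.05):
    """Coarse tuning with PSO, then fine tuning with gradient descent from the PSO solution."""
    W_pso, _ = pso_learn_fcm(f, shape, stop_below=handover)   # stop PSO once f(W) < handover
    return gd_learn_fcm(f, shape, W_init=W_pso)               # refine the PSO weights
```

Choosing the hand-over threshold trades PSO time against the quality of the starting point handed to gradient descent, which is exactly the balance discussed above.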

Fig. 4 Flow chart of the FCM learning algorithm based on PSO and gradient descent

4. Data preprocessing

The raw data obtained from the eye tracker and physiological instruments consist of 30 columns: physiological data (6 columns), oculomotor data (22 columns), and time markers (2 columns). The average binocular coordinates (2 columns), time markers (2 columns), binocular coordinates (4 columns), and binocular sweep (gaze) marker columns (4 columns), which are not relevant to the learning purpose, are eliminated, leaving only the physiological data (6 columns), binocular pupil data (6 columns), binocular sweep time (2 columns), binocular gaze time (2 columns), and binocular sweep angle (2 columns). Also, as the sampling frequency of each sensor differs, the sensors with a low sampling frequency leave many missing values in their columns, so the data are classified according to the sampling frequency of the sensors.

(i) Respiratory-electromyogram (EMG) data. Includes EMG, thoracic respiration, and diaphragmatic respiration.

(ii) Heart rate-blood oxygen saturation (SaO2) data. Includes heart rate (upper and lower limits) and SaO2.

(iii) Ocular motility sensor data. Includes left (right) pupil height, pupil width, and pupil area; left (right) eye gaze marker; left (right) eye sweep marker; left (right) eye sweep time; and left (right) eye sweep angle of view.

After classifying the data into the three types above, the missing values caused by the differing sampling frequencies are removed, and the 20 s of data preceding the "freezing" moment are selected as valid data according to the time marker. In addition to missing values due to low sampling frequencies, there are also missing values due to temporary equipment failures: random missing values in continuous data sequences are interpolated using the K-nearest neighbor (K-NN) method [25], while large gaps caused by temporary equipment failures are rejected.

The data from each source are processed and the relevant features are extracted as follows.

        4.1 Respiratory-EMG sensor data

(i) Electromyographic data. The surface electromyography (sEMG) signal of the long wrist extensor of the arm used to control the grip is collected during the experiment; the EMG data collected during the experiment are shown in Fig. 5.

Fig. 5 Example of electromyographic data

The overall trend of the EMG data indicates the movement performed, and the magnitude of the EMG reflects the degree of muscle tension. In assessing the state of SA, the degree of muscle tension is considered to partially reflect the SA state [26], and therefore metrics related to the magnitude of the EMG are selected [27-29]. The following indicators are extracted as alternative features (a computational sketch is given at the end of this subsection).

i) Integral of absolute value (IAV):

$$\mathrm{IAV} = \sum_{k=1}^{N} |x_k|.$$

ii) Mean square value (MSV). The MSV represents the energy of the signal and is its second-order moment:

$$\mathrm{MSV} = \frac{1}{N}\sum_{k=1}^{N} x_k^2.$$

iii) Variance (VAR). The variance represents the dynamic component of the signal energy (the mean square is the static component) and is the second-order central moment:

$$\mathrm{VAR} = \frac{1}{N-1}\sum_{k=1}^{N} (x_k - \bar{x})^2.$$

iv) Root mean square (RMS):

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{k=1}^{N} x_k^2}.$$

v) Willison amplitude (WAMP):

$$\mathrm{WAMP} = \sum_{k=1}^{N-1} \mathbb{1}\big(|x_{k+1} - x_k| > \text{threshold}\big).$$

The threshold is often set at 50-100 μV; in this experiment it is 50 μV.

        (ii) Thoracic respiration signal (TRS) and diaphragmatic respiration signal (DRS)

As both thoracic respiration and diaphragmatic respiration measure human breathing, the thoracic and diaphragmatic respiration data are similar; their raw data are shown in Fig. 6. Because of this similarity, the same treatment is adopted for both. The raw data contain many spikes due to measurement errors, so the data need to be filtered to remove the noise. In this paper, the data are smoothed using a Kalman filter [30], with the observation variance chosen as 1. The data after noise removal are shown in Fig. 7.

Fig. 6 Raw thoracic respiratory data and diaphragmatic respiratory data

Fig. 7 Thoracic respiratory data and diaphragmatic respiratory data after Kalman filtering

As with the EMG data, the absolute mean, mean square, variance, and RMS values are selected as alternative features; in addition, because of the apparent periodicity of the thoracic (diaphragmatic) respiration data, the period T, frequency f, peak P, trough V, and mean peak-to-peak value (MPP) are also selected.
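As a sketch of the two processing steps above, the functions below compute the listed time-domain sEMG features and smooth a respiration channel with a scalar Kalman filter under a random-walk state model. The Willison threshold (50 μV) and the observation variance (1) follow the text; the process variance q and all other settings are assumptions of the sketch.

```python
import numpy as np

def emg_features(x, wamp_threshold=50.0):
    """Time-domain sEMG features: IAV, MSV, VAR, RMS, and Willison amplitude (threshold in uV)."""
    x = np.asarray(x, dtype=float)
    msv = np.mean(x ** 2)
    return {
        "IAV": np.sum(np.abs(x)),                         # integral of absolute value
        "MSV": msv,                                       # mean square value (signal energy)
        "VAR": np.var(x, ddof=1),                         # dynamic component of the energy
        "RMS": np.sqrt(msv),                              # root mean square
        "WAMP": int(np.sum(np.abs(np.diff(x)) > wamp_threshold)),  # Willison amplitude
    }

def kalman_smooth(z, r=1.0, q=0.01):
    """Scalar Kalman filter with a random-walk state model (r = observation variance, q assumed)."""
    z = np.asarray(z, dtype=float)
    x_hat, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                                 # predict
        gain = p / (p + r)                        # Kalman gain
        x_hat = x_hat + gain * (zk - x_hat)       # update with the new observation
        p = (1.0 - gain) * p
        out[k] = x_hat
    return out
```

The smoothed respiration trace can then be fed to the same statistical features, plus peak detection for the period, frequency, peak, trough, and mean peak-to-peak indicators.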

        4.2 Heart rate-blood SaO2 sensor data

(i) Blood SaO2 data. A finger-held photoelectric sensor is used to estimate hemoglobin concentration and blood SaO2: the sensor is simply placed on a human finger, which serves as a transparent container of hemoglobin; red light at a wavelength of 660 nm and near-infrared light at 940 nm are used as the incident light sources, and the intensity of the light transmitted through the tissue bed is measured. The raw data of the measured blood SaO2 are shown in Fig. 8.

Fig. 8 Raw data of the measured blood SaO2

As can be seen from Fig. 8, the data are basically smooth and noise has little effect on them, so no filtering is performed. At the same time, the SaO2 signal shows an obvious periodicity, which is caused by the pumping of the heart. Within each cycle there are also two small peaks and troughs, which are called sub-crests and sub-troughs in this paper. A cycle also reflects the duration of one heartbeat, so the number of cycles within the time window can be counted to derive heart rate data. Thus, the following alternative indicators can be obtained from SaO2: the mean, variance, effective value, and IAV of the signal, as well as the mean and variance of the peak, trough, cycle, heart rate, and peak-to-peak value, and of the sub-crests' and sub-troughs' cycle, frequency, peak, trough, and peak-to-peak values (24 alternative indicators in total; a peak-detection sketch is given after this list).

(ii) Heart rate data. The heart rate data in this experiment give the upper and lower limits of heart rate, and their means and variances are selected as alternative indicators (4 alternative indicators in total).
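A sketch of how the cyclic SaO2 indicators could be extracted with simple peak detection. The use of scipy.signal.find_peaks and the minimum peak spacing are choices of this sketch, not the paper's exact procedure, and the snippet only covers the main peaks and troughs (the sub-crests and sub-troughs would need a second, finer pass).

```python
import numpy as np
from scipy.signal import find_peaks

def sao2_cycle_features(x, fs):
    """Cycle-level SaO2 indicators from the main peaks and troughs (fs = sampling rate in Hz)."""
    x = np.asarray(x, dtype=float)
    min_gap = max(1, int(0.33 * fs))              # assumes a heart rate below ~180 beats/min
    peaks, _ = find_peaks(x, distance=min_gap)
    troughs, _ = find_peaks(-x, distance=min_gap)
    cycles = np.diff(peaks) / fs                  # cycle lengths in seconds (one per heartbeat)
    return {
        "mean": x.mean(), "var": x.var(),
        "rms": np.sqrt(np.mean(x ** 2)), "iav": np.sum(np.abs(x)),
        "peak_mean": x[peaks].mean(), "trough_mean": x[troughs].mean(),
        "cycle_mean": cycles.mean(), "cycle_var": cycles.var(),
        "heart_rate_mean": 60.0 / cycles.mean(),
    }
```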

        4.3 Eye-tracker data

The eye-tracker data mainly include pupil height, pupil width, pupil area, the left (right) eye gaze marker, the left (right) eye sweep marker, the left (right) eye sweep time, and the left (right) eye sweep angle of view. The data show no obvious periodicity or regularity, but they contain many missing values and outliers.

(i) Processing of pupil height, pupil width, and pupil area data for both eyes.

To facilitate the processing and visualization of the data, the z-score is first used for standardization:

$$z = \frac{x - \mu}{\sigma},$$

where $x$ is the observed value, $\mu$ is the overall mean, and $\sigma$ is the overall standard deviation. Considering the large number of outliers in the data given by the eye tracker, in order to eliminate their influence on the z-score normalization, a robust scaling is used: the mean and standard deviation are calculated from the data between the first and third quartiles, and z-score normalization is then applied. The distribution of the standardized data is shown in Fig. 9.

Fig. 9 Box plot of pupil data distribution for both eyes

The outliers in the data are then processed. Outliers are first detected by the Tukey test, which uses the interquartile range (IQR), the difference between the upper quartile and the lower quartile. Using 1.5 times the IQR as the standard, points more than 1.5 IQR above the upper quartile or more than 1.5 IQR below the lower quartile are marked as outliers [31]. After labeling these outliers, the K-NN algorithm is used to interpolate the outliers and missing values in the dataset. The Euclidean distance is used to find the nearest sample points, and each missing feature is estimated from the values of the n nearest sample points, weighted according to their distance from the sample point to be estimated. When the number of available nearest sample points is less than n, the average value of the feature is used for the interpolation. In this paper, n is taken to be 5. The distribution of the data after K-NN interpolation is shown in Fig. 10.
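A sketch of this cleaning pipeline: robust z-scoring with quartile-based statistics, Tukey's 1.5 x IQR rule to mark outliers as missing, and K-NN imputation with n = 5. Using scikit-learn's KNNImputer (distance-weighted, Euclidean by default) is an implementation choice of this sketch, and the column names in the usage comment are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

def clean_pupil_columns(df: pd.DataFrame, cols, n_neighbors=5) -> pd.DataFrame:
    """Robust z-score, mark Tukey outliers as missing, then K-NN impute the gaps."""
    out = df.copy()
    for c in cols:
        q1, q3 = out[c].quantile([0.25, 0.75])
        core = out[c][(out[c] >= q1) & (out[c] <= q3)]      # data between the quartiles
        out[c] = (out[c] - core.mean()) / core.std()        # robust z-score normalization
        q1, q3 = out[c].quantile([0.25, 0.75])
        iqr = q3 - q1
        is_outlier = (out[c] < q1 - 1.5 * iqr) | (out[c] > q3 + 1.5 * iqr)   # Tukey test
        out.loc[is_outlier, c] = np.nan                     # treat outliers as missing values
    imputer = KNNImputer(n_neighbors=n_neighbors, weights="distance")
    out[cols] = imputer.fit_transform(out[cols])
    return out

# Example with placeholder column names:
# cleaned = clean_pupil_columns(raw_df, ["pupil_height_L", "pupil_width_L", "pupil_area_L"])
```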

Fig. 10 Box plot of pupil data distribution for both eyes (after processing outliers and missing values)

After dealing with the outliers and missing values, the means and variances of the above indicators are calculated as alternative features.

(ii) Gaze and sweep data. For the sweep and gaze data, the features of interest are their number, the maximum and minimum durations, the average duration and variance of the gaze (sweep), and the duration and variance of the sweep angle.

After processing all of the above data, a total of 101 alternative features are obtained: 40 features from the eye movement data, five from the EMG, 14 each from thoracic and diaphragmatic respiration, and 28 from the heart rate-SaO2 sensors. The features are then filtered to prevent the curse of dimensionality caused by the high dimensionality.

        4.4 Feature selection

The features above are filtered using a random forest [32]. The 101 features obtained above, with the results of the SAGAT scale as labels, are subjected to max-min normalization and then used as training data for the random forest, which is trained with 100 decision trees, each with a maximum depth of 10. Finally, the importance of each feature is obtained, as shown in Fig. 11.

Fig. 11 Importance of each feature

The horizontal axis of Fig. 11 indicates the index of each feature. After eliminating the features with importance less than 0.005, 59 features remain, and the sum of the importance of these 59 features is 0.9497.
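A sketch of this ranking step with the stated settings (max-min normalization, 100 trees, maximum depth 10, importance cut-off 0.005). The random arrays only stand in for the 200 collected trials and their SAGAT scores, and the regressor and seed choices are assumptions of the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((200, 101))          # stand-in for the 101 candidate features over 200 trials
y = rng.random(200)                 # stand-in for the SAGAT scores

X_scaled = MinMaxScaler().fit_transform(X)                           # max-min normalization
y_scaled = MinMaxScaler().fit_transform(y.reshape(-1, 1)).ravel()

rf = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
rf.fit(X_scaled, y_scaled)

keep = rf.feature_importances_ >= 0.005             # drop features with importance < 0.005
X_selected = X_scaled[:, keep]
print(keep.sum(), rf.feature_importances_[keep].sum())   # 59 features (importance sum 0.9497) in the paper
```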

5. Results and analysis

The 59 selected features and one column of labels are used as the nodes of the FCM, i.e., an FCM with 60 nodes is constructed by the learning algorithm. In the learning data, missing values due to sensor failure are treated as 0. A total of 200 sets of data are collected in the experiments; 80% are randomly used as the training set and the remaining 20% as the test set. The PSO algorithm, the gradient descent algorithm, and the hybrid PSO-gradient descent learning algorithm are each used to construct the FCM.
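A short sketch of this setup, reusing the arrays from the feature-selection sketch above; the random seed and the use of scikit-learn's train_test_split are choices of the sketch.

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.hstack([X_selected, y_scaled.reshape(-1, 1)])   # 59 features + 1 SA label per trial
data = np.nan_to_num(data)                                # sensor-failure gaps treated as 0
train, test = train_test_split(data, test_size=0.2, random_state=0)   # 80% / 20% split

n_nodes = data.shape[1]                                   # 60 FCM concepts in total
print(train.shape, test.shape, n_nodes)                   # (160, 60) (40, 60) 60
```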

        5.1 Metrics for the results

        The effectiveness of the model fitting is evaluated by the following metrics.

5.1.1 Explained variance (EV) score

The EV score measures the proportion of the variance of the true values that is captured by the predictions and is defined as

$$\mathrm{EV}(y, \hat{y}) = 1 - \frac{\mathrm{Var}(y - \hat{y})}{\mathrm{Var}(y)},$$

with a best possible score of 1.0.

5.1.2 Maximum error (ME)

This metric calculates the maximum residual, i.e., the largest error between the predicted and true values. For a perfectly fitted single-output regression model, the ME on the training set is 0, although in the real world this is almost impossible. This indicator shows the worst-case error of the model fit. If $\hat{y}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the ME is defined as

$$\mathrm{ME}(y, \hat{y}) = \max_i |y_i - \hat{y}_i|.$$

5.1.3 Mean absolute error (MAE)

The MAE is defined as

$$\mathrm{MAE}(y, \hat{y}) = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|.$$

5.1.4 Mean squared error (MSE)

The MSE is defined as

$$\mathrm{MSE}(y, \hat{y}) = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2.$$

5.1.5 Median absolute error (MedAE)

The MedAE is defined as

$$\mathrm{MedAE}(y, \hat{y}) = \operatorname{median}\big(|y_1 - \hat{y}_1|, \ldots, |y_n - \hat{y}_n|\big).$$

Using the median of the residuals as a performance measure is highly robust: it does not change drastically in the presence of outliers.

5.1.6 Coefficient of determination, R2

The coefficient of determination indicates the proportion of the variance in the dependent variable that is explained by the independent variables of the model. It provides an indication of the goodness of fit and thus a measure of how well the model is likely to predict unseen samples, through the proportion of variance explained. The best possible score is 1.0, but the score can also be negative. A model that always predicts the expected value of $y$, ignoring the input features, gets a score of 0.
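All six metrics are available in scikit-learn; a minimal sketch follows, where y_true and y_pred are stand-ins for the SAGAT labels and the FCM's predictions.

```python
import numpy as np
from sklearn.metrics import (explained_variance_score, max_error, mean_absolute_error,
                             mean_squared_error, median_absolute_error, r2_score)

y_true = np.array([0.8, 0.4, 0.6, 0.9])      # stand-in SAGAT labels
y_pred = np.array([0.75, 0.5, 0.55, 0.85])   # stand-in FCM predictions of the SA node

print("EV   ", explained_variance_score(y_true, y_pred))
print("ME   ", max_error(y_true, y_pred))
print("MAE  ", mean_absolute_error(y_true, y_pred))
print("MSE  ", mean_squared_error(y_true, y_pred))
print("MedAE", median_absolute_error(y_true, y_pred))
print("R2   ", r2_score(y_true, y_pred))
```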

        5.2 The results

The values of the objective function versus the number of iteration steps when learning the FCM with the PSO algorithm are shown in Fig. 12.

Fig. 12 Objective function values with iteration steps when learning the FCM using the PSO algorithm

The objective function when learning the FCM with gradient descent, and the change in the l2 norm of the gradient with the number of iterations, are shown in Fig. 13.

Fig. 13 Objective function values and l2 norm of the gradient optimized by the gradient descent method

As the learning effect of gradient descent depends strongly on the initial value, the learning result is poor and unusable when the initial value is chosen randomly, so the required accuracy is reached only after several reselections of the initial value. From Fig. 13, it can be seen that the gradient and the objective function go through several large ups and downs before reaching the required accuracy.

The variation of the objective function value with the number of iteration steps for the learning algorithm combining PSO and gradient descent, and the variation of the l2 norm of the gradient with the number of iteration steps, are shown in Fig. 14.

Fig. 14 Objective function curve and gradient (l2 norm) curve for the FCM learned with PSO and gradient descent

As can be seen from Fig. 14, the objective function value is first driven below a threshold by the PSO algorithm, and a gradient descent stage then follows for fine-tuning.

The performance metrics of each learning algorithm on the training set are shown in Table 1, and those on the test set in Table 2.

Comparing the metrics on the training set and the test set, it can be seen that PSO reaches the best learning result: its EV and R2 are perfect and its ME, MAE, and MSE are the lowest, but its running time is the longest. The almost perfect results on the training set may indicate over-fitting, and the long running time shows relatively low efficiency. Gradient descent is the least effective learning algorithm, with the lowest R2 and EV and the highest ME, MAE, and MedAE. For the hybrid algorithm, the PSO stage uses about one third of the time of pure PSO to obtain a reasonably good result, and the gradient descent stage then fine-tunes it, taking about 9000 s, to reach a good result. The hybrid algorithm thus achieves almost the same quality as pure PSO, with a running time of about two thirds of that of pure PSO. Comparing Table 1 with Table 2 makes it easy to check for over-fitting or under-fitting: all of the algorithms show slight over-fitting, since their EV and R2 decrease a little and their ME, MAE, MSE, and MedAE increase a little from the training set to the test set. However, such small changes have a negligible impact.

        Table 1 Performance metrics for each learning algorithm (training set)

        Table 2 Performance metrics for each learning algorithm (test set)

6. Conclusions

This paper has focused on SA status measurement based on the FCM model. At the current state of the art, only the SAGAT has good statistical validity for measuring SA status; however, the SAGAT cannot be applied in real time, since the operators have to be interrupted during their missions. As a result, many missions that require the operator's real-time SA status cannot be supported. To tackle this problem, we use eye-tracker and physiological data to predict the SAGAT results, since the data from the eye tracker and physiological instruments are highly relevant to a human's cognitive status and can be obtained in real time. The FCM is chosen as the learning machine because of its interpretability and ease of use. However, no sufficiently efficient learning method for the FCM exists, so we apply a hybrid method of PSO and gradient descent to learn the FCM and obtain good results.

To summarize, the contributions of this paper are the construction of a 60-node FCM that assesses the operators' SA status in real time, and the use of a hybrid algorithm of PSO and gradient descent to learn the FCM with high efficiency.

The learned FCM contains the knowledge extracted from the data and indicates the links between the nodes, which can help us understand how eye movements and the relevant physiological signals are related to the SA status. Moreover, the model can assess the operator's SA status in real time without interrupting the mission, since the devices can be embedded in the platform, or even in the operator's clothing.

Future work will focus on how to deal with the uncertainty of the data and with dynamic environments, which the FCM cannot handle. For example, heart rate data coming from different sensors may be inconsistent, and a sudden change of the situation may cause a sudden loss of the operator's SA that is not captured by the FCM. One solution is to use relevant FCM extensions, such as dynamic fuzzy general grey cognitive maps, to accommodate uncertain data and dynamic environments; the learning algorithms for such FCM extensions also need to be designed.
