Objective: A central component of search and rescue missions is the visual search for survivors. In large part, this depends on human operators and is therefore subject to the constraints of human cognition, such as mental fatigue (MF). This makes detecting MF a critical capability for future systems. However, to the best of our knowledge, it has seldom been evaluated using a realistic visual search task. In addition, an accuracy discrepancy exists between studies that label MF using time-on-task (TOT), the most popular method, and those that use performance metrics; yet, to our knowledge, the two labeling approaches have never been directly compared.

Approach: This study was designed to address both issues: the use of a realistic, monotonous visual search task to elicit MF, and the type of label used for intra-participant fatigue estimation. Over four blocks of 15 min, participants had to identify targets on a computer screen while their cardiac, cerebral (EEG), and eye-movement activities were recorded. The recorded data were then fed into several physiological computing pipelines.

Main results: The results show that the capability of a machine learning algorithm to detect MF depends less on the input data than on how MF is defined. With TOT-based labels, very high classification accuracies are obtained (e.g. 99.3%). If, however, MF is estimated from behavioral performance, a metric of much greater operational value, classification accuracies drop to chance level (i.e. 52.2%).

Significance: Strong classification accuracies can be achieved from a multitude of sensors when MF is labeled by TOT, which contributes to the popularity of this method; however, both its operational usability and its relation to the concept of MF are neglected.
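The contrast between the two labeling strategies can be illustrated with a minimal sketch. The pipeline below is not the authors' implementation; it uses synthetic placeholder features and hypothetical variable names (block, behav_acc) purely to show how TOT-based labels (first vs. last block) and performance-based labels (below-median behavioral accuracy) would be derived and evaluated with the same classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: one feature vector per epoch (e.g. EEG band powers,
# heart-rate variability, eye-movement statistics), the block index
# (0-3, four 15-min blocks), and a per-epoch behavioral accuracy score.
rng = np.random.default_rng(0)
n_epochs, n_features = 240, 20
X = rng.normal(size=(n_epochs, n_features))        # placeholder features
block = np.repeat(np.arange(4), n_epochs // 4)     # time-on-task proxy
behav_acc = rng.uniform(0.6, 1.0, size=n_epochs)   # placeholder performance

# Labeling scheme 1: time-on-task (TOT)
# first block = "fresh" (0), last block = "fatigued" (1)
mask_tot = np.isin(block, [0, 3])
y_tot = (block[mask_tot] == 3).astype(int)

# Labeling scheme 2: behavioral performance
# below-median accuracy = "fatigued" (1)
y_perf = (behav_acc < np.median(behav_acc)).astype(int)

# Same classifier, two label definitions
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc_tot = cross_val_score(clf, X[mask_tot], y_tot, cv=5).mean()
acc_perf = cross_val_score(clf, X, y_perf, cv=5).mean()
print(f"TOT-labeled accuracy:         {acc_tot:.3f}")
print(f"Performance-labeled accuracy: {acc_perf:.3f}")
```

With real physiological features, the study's point is that the first scheme can yield very high accuracies while the second remains near chance; with the random placeholders above, both naturally hover around chance.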
Keywords: mental fatigue; passive brain–computer interface; performance estimation; physiological computing; visual search.
© 2024 IOP Publishing Ltd. All rights, including for text and data mining, AI training, and similar technologies, are reserved.