Detecting the emotional state of others from facial expressions is a key ability in emotional competence, and several instruments have been developed to assess it. Typical emotion recognition tests are assumed to be unidimensional, use pictures or videos of emotional portrayals as stimuli, and ask the participant which emotion is depicted in each stimulus. However, using actor portrayals adds a layer of difficulty in developing such a test: the portrayals may fail to be convincing and may convey a different emotion than intended. For this reason, evaluating and selecting stimuli is of crucial importance. Existing tests typically base item evaluation on consensus or expert judgment, but these methods can favor items with high agreement over items that better differentiate ability levels, and they cannot formally test the item pool for unidimensionality. To address these issues, the authors propose a new test, named the Facial Expression Recognition Test (FERT), developed using an item response theory two-parameter logistic model. Data from 1,002 online participants were analyzed using both a unidimensional and a bifactor model, and showed that the item pool could be considered unidimensional. Item selection was based on the discrimination parameters, retaining only the items most informative about the latent ability. The resulting 36-item test was reliable and quick to administer. The authors found both a gender difference in the ability to recognize emotions and a decline of that ability with age. The PsychoPy implementation of the test and the scoring script are available in a GitHub repository.
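The selection principle described in the abstract can be illustrated with a minimal sketch of the standard two-parameter logistic (2PL) model. This is a generic textbook formulation, not code from the paper's repository: the probability of a correct response depends on the participant's ability θ, the item's difficulty b, and its discrimination a, and an item's Fisher information grows with a², which is why retaining high-discrimination items yields the most informative test.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of answering correctly
    at ability theta, for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P).
    More discriminating items concentrate more information near b."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# At theta == b the response probability is 0.5 regardless of a.
print(p_correct(0.0, a=1.5, b=0.0))  # → 0.5

# A high-discrimination item (a = 2.0) carries more information at its
# difficulty than a low-discrimination one (a = 0.8): 1.0 vs 0.16.
print(item_information(0.0, a=2.0, b=0.0))
print(item_information(0.0, a=0.8, b=0.0))
```

In this sketch, ranking the item pool by peak information (equivalently, by discrimination a) and keeping the top items mirrors the kind of discrimination-based selection the abstract describes.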
|Title:||Development and Validation of the Facial Expression Recognition Test (FERT)|
|Publication date:||2018|
|Appears in collections:||01.01 - Journal article|
Files in this item:
|Development and Validation of the Facial Expression Recognition Test (FERT).pdf||Pre-print document||Open Access|