Development and Validation of the Facial Expression Recognition Test (FERT)

Passarelli, Marcello; Masini, Michele; Bracco, Fabrizio; Chiorri, Carlo
2018-01-01

Abstract

Detecting the emotional state of others from facial expressions is a key ability in emotional competence, and several instruments have been developed to assess it. Typical emotion recognition tests are assumed to be unidimensional, use pictures or videos of emotional portrayals as stimuli, and ask the participant which emotion is depicted in each stimulus. However, using actor portrayals adds a layer of difficulty in developing such a test: the portrayals may fail to be convincing and may convey a different emotion than intended. For this reason, evaluating and selecting stimuli is of crucial importance. Existing tests typically base item evaluation on consensus or expert judgment, but these methods can favor items with high agreement over items that better differentiate ability levels, and they cannot formally test the item pool for unidimensionality. To address these issues, the authors propose a new test, the Facial Expression Recognition Test (FERT), developed using an item response theory two-parameter logistic (2PL) model. Data from 1,002 online participants were analyzed using both a unidimensional and a bifactor model, showing that the item pool could be considered unidimensional. Item selection was based on the items' discrimination parameters, retaining only the items most informative about the latent ability. The resulting 36-item test was reliable and quick to administer. The authors found both a gender difference in the ability to recognize emotions and a decline of this ability with age. The PsychoPy implementation of the test and the scoring script are available in a GitHub repository.
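As background on the approach described in the abstract, the following is a minimal Python sketch of the 2PL item response function and the corresponding item information function, which is what makes discrimination-based item selection sensible: information grows with the square of the discrimination parameter. The function names and parameter values are illustrative assumptions, not the authors' actual scoring script.

import numpy as np

def p_correct(theta, a, b):
    # 2PL item response function: probability of a correct response
    # for latent ability theta, item discrimination a, and difficulty b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta: a**2 * P * (1 - P).
    # Because information scales with a**2, high-discrimination items
    # are the most informative and are the ones retained during selection.
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Hypothetical item parameters, for illustration only.
theta = 0.0  # an examinee of average ability
print(item_information(theta, a=1.8, b=0.2))  # highly discriminating item
print(item_information(theta, a=0.6, b=0.2))  # weakly discriminating item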
Files in this record:
Development and Validation of the Facial Expression Recognition Test (FERT).pdf

Open access

Type: Pre-print
Size: 777.78 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/919765
Citations
  • PMC: 7
  • Scopus: 14
  • Web of Science (ISI): 14