
VIDA: The Visual Incel Data Archive. A Theory-oriented Annotated Dataset To Enhance Hate Detection Through Visual Culture.

Selenia Anastasi
2024-01-01

Abstract

Images increasingly constitute a significant portion of internet content, encoding ever more complex meanings. Recent studies have highlighted the pivotal role of visual communication in the spread of extremist content, particularly content associated with right-wing political ideologies. However, the ability of machine learning systems to recognize such meanings, which are sometimes implicit, remains limited. To enable future research in this area, we introduce and release VIDA, the Visual Incel Data Archive, a multimodal dataset comprising visual material and internet memes collected from two central Incel communities (Italian and Anglophone) known for their extremist misogynistic content. Following the analytical framework of Shifman (2014), we propose a new taxonomy for annotation across three primary levels of analysis: content, form, and stance (hate). This allows images to be associated with fine-grained contextual information that helps identify offensiveness and a broader set of cultural references, enhancing the understanding of more nuanced aspects of visual communication. In this work, we present a statistical analysis of the annotated dataset and discuss annotation examples and future lines of research.
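As a purely illustrative sketch (not taken from the paper or the released dataset), the three annotation levels named in the abstract could be represented as a per-image record like the one below; all field names and label values are hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class VIDAAnnotation:
    """Hypothetical annotation record illustrating the three levels
    described in the abstract: content, form, and stance (hate).
    Field names and label values are assumptions, not the released schema."""
    image_id: str                                      # identifier of the collected image or meme
    community: str                                     # source community, e.g. "Italian" or "Anglophone"
    content: list[str] = field(default_factory=list)   # depicted topics and cultural references
    form: list[str] = field(default_factory=list)      # visual format, e.g. "image macro", "screenshot"
    stance: str = "none"                               # hate/offensiveness label, e.g. "misogynistic"

# Illustrative usage with invented values
example = VIDAAnnotation(
    image_id="img_0001",
    community="Anglophone",
    content=["blackpill reference"],
    form=["image macro"],
    stance="misogynistic",
)
print(example)
```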
Year: 2024
ISBN: 979-8-89176-105-6

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1201336
