
From synsets to videos: Enriching ItalWordNet multimodally

Irene De Felice
2014-01-01

Abstract

The paper describes the multimodal enrichment of ItalWordNet action verb entries by means of an automatic mapping with a conceptual ontology of action types instantiated by video scenes (ImagAct). The two resources present significant differences as well as interesting complementary features, such that a mapping between them can lead to an enrichment of IWN through the connection between synsets and videos apt to illustrate the meaning described by glosses. Here, we describe an approach inspired by ontology matching methods for the automatic mapping of ImagAct video scenes onto ItalWordNet. The experiments described in the paper are conducted on Italian, but the same methodology can be extended to other languages for which WordNets have been created, since ImagAct is also available for English, Chinese and Spanish. This source of multimodal information can be exploited to design second language learning tools, as well as for language grounding in action recognition in video sources and potentially for robotics.
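To give a flavour of what a lexical baseline for this kind of mapping might look like, the sketch below ranks candidate synsets by token overlap between a video scene's caption and each synset gloss. This is only an illustrative toy, not the paper's actual method: the scene caption, synset identifiers, and glosses are all invented, and real ontology-matching pipelines combine several similarity signals rather than plain Jaccard overlap.

```python
# Hypothetical gloss-overlap matcher in the spirit of lexical
# ontology-matching baselines. All data and identifiers below are
# invented for illustration; they are not the real IWN/ImagAct APIs.

def tokens(text):
    """Lowercase bag of word tokens, with surrounding punctuation stripped."""
    return {w.strip(".,;:()").lower() for w in text.split() if w.strip(".,;:()")}

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def best_synset(scene_caption, synsets):
    """Rank candidate synsets by lexical overlap between the scene
    caption and each gloss; return the best (synset_id, score) pair."""
    scene = tokens(scene_caption)
    return max(((sid, jaccard(scene, tokens(gloss))) for sid, gloss in synsets),
               key=lambda pair: pair[1])

# Toy example with invented Italian glosses for two senses of "girare":
synsets = [
    ("iwn:girare#1", "muovere qualcosa intorno a un asse, far ruotare"),
    ("iwn:girare#2", "cambiare direzione durante il movimento"),
]
sid, score = best_synset("una persona fa ruotare una chiave intorno a un asse",
                         synsets)
# sid is "iwn:girare#1": the caption shares several tokens with that gloss.
```

In a realistic setting the scene caption would come from ImagAct's annotations and the glosses from IWN synsets, with overlap scores thresholded before a scene-to-synset link is accepted.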
ISBN: 978-2-9517408-8-4
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11567/1122395

Warning: the data displayed have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 4
  • Web of Science: 1