From synsets to videos: Enriching ItalWordNet multimodally
Irene De Felice;
2014-01-01
Abstract
The paper describes the multimodal enrichment of ItalWordNet action verb entries by means of an automatic mapping onto a conceptual ontology of action types instantiated by video scenes (ImagAct). The two resources present significant differences as well as interesting complementary features, such that a mapping between them can lead to an enrichment of IWN through the connection between synsets and videos that illustrate the meanings described by glosses. Here, we describe an approach inspired by ontology matching methods for the automatic mapping of ImagAct video scenes onto ItalWordNet. The experiments described in the paper are conducted on Italian, but the same methodology can be extended to other languages for which WordNets have been created, since ImagAct is also available for English, Chinese and Spanish. This source of multimodal information can be exploited to design second language learning tools, as well as for language grounding in action recognition in video sources, and potentially for robotics.
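The abstract does not detail the matching procedure itself; purely as an illustration of the kind of ontology-matching approach it alludes to, the sketch below scores candidate scene–synset pairs by lexical overlap between the verb lemmas attached to an ImagAct video scene and the lemmas and gloss of an IWN synset. All names (`Scene`, `Synset`, `overlap_score`, `map_scenes`) and the scoring heuristic are hypothetical assumptions for illustration, not the method actually used in the paper.

```python
# Hypothetical sketch of a lexical-overlap matcher between ImagAct scenes
# and ItalWordNet synsets. Data structures and scoring are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass


@dataclass
class Scene:
    scene_id: str
    verbs: set          # Italian verb lemmas attached to the video scene
    caption: str = ""   # short linguistic description of the scene


@dataclass
class Synset:
    synset_id: str
    lemmas: set         # verb lemmas grouped in the synset
    gloss: str = ""     # definition text


def overlap_score(scene: Scene, synset: Synset) -> float:
    """Jaccard overlap between scene verbs and synset lemmas,
    with a small bonus when gloss words appear in the scene caption."""
    if not scene.verbs or not synset.lemmas:
        return 0.0
    jaccard = len(scene.verbs & synset.lemmas) / len(scene.verbs | synset.lemmas)
    caption_words = set(scene.caption.lower().split())
    gloss_words = set(synset.gloss.lower().split())
    gloss_bonus = 0.1 if caption_words & gloss_words else 0.0
    return jaccard + gloss_bonus


def map_scenes(scenes, synsets, threshold=0.2):
    """Return, for each scene, the best-scoring synset above the threshold."""
    mapping = {}
    for scene in scenes:
        best = max(synsets, key=lambda s: overlap_score(scene, s), default=None)
        if best is not None and overlap_score(scene, best) >= threshold:
            mapping[scene.scene_id] = best.synset_id
    return mapping
```

In such a scheme, the threshold would trade precision against coverage; the actual mapping described in the paper may rely on richer evidence (e.g., translation equivalents or sense frequencies) than this toy lexical overlap.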