Evaluating the effectiveness of automatic image captioning for web accessibility

Leotta, Maurizio; Mori, Fabrizio; Ribaudo, Marina
2022-01-01

Abstract

The web has become a fundamental tool for carrying out many activities, spanning from education to work and private life. For this reason, it must be accessible to every user, regardless of any form of impairment or disability. Images on the web are a primary means of communicating information, and specific HTML elements were defined to enrich images with textual descriptions, which can be read aloud by screen readers or rendered by braille displays. A key problem is that adding a textual description to each image published on a website is a demanding task, requiring non-negligible effort from web developers. Several machine-learning-based tools have emerged that can automatically generate descriptions for images. In this work, we evaluate the correctness of their outputs by comparing the generated descriptions with human-defined references. More specifically, we selected 60 images from Wikipedia along with their descriptions as defined by Wikipedia contributors. We then generated the corresponding descriptions using four state-of-the-art tools (Azure Computer Vision Engine, Amazon Rekognition, Cloudsight, and Auto Alt-Text for Google Chrome) and asked 76 computer science students to evaluate the perceived correctness of the descriptions without being aware of their source. The results show that the descriptions available on Wikipedia are still perceived as the best ones. However, some tools produce good results for specific categories of images, making them suitable candidates for the automated, large-scale addition of image descriptions to websites and helping to drastically increase the accessibility of the web.
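
As a concrete illustration of the kind of pipeline such tools enable (not the authors' actual experimental setup), the minimal sketch below queries Azure Computer Vision's REST "describe" endpoint for a one-sentence caption and embeds it as the alt text of an HTML img element. The endpoint URL, subscription key, and image URL are hypothetical placeholders.

```python
# Minimal sketch: generate alt text for an image via Azure Computer Vision
# (v3.2 REST "describe" endpoint) and emit an <img> tag carrying it.
# AZURE_ENDPOINT, AZURE_KEY, and the image URL are placeholders.
import requests

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical
AZURE_KEY = "<your-subscription-key>"  # hypothetical

def describe_image(image_url: str) -> str:
    """Return the top machine-generated caption for a publicly reachable image."""
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/describe",
        params={"maxCandidates": 1, "language": "en"},
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]
    return captions[0]["text"] if captions else ""

if __name__ == "__main__":
    url = "https://example.org/photo.jpg"  # placeholder image URL
    alt_text = describe_image(url)
    # The alt attribute is the HTML mechanism screen readers rely on.
    print(f'<img src="{url}" alt="{alt_text}">')
```

The same pattern applies to the other tools evaluated in the paper, each exposed through its own API.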
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1099898
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: 0
  • Scopus: 1
  • Web of Science: 0