Posthuman intelligence in healthcare organizations. Exploring human and nonhuman technological encounters in medical work with A.I.
A. Gasparre
2024-01-01
Abstract
We explore how human and nonhuman intelligence become entangled in medical work with A.I. By means of a cross-sectional experimental online survey, we study people's attitudes towards human and nonhuman intelligence and how they change when the technology of the clinical process is all human (traditional in-person, face-to-face assessment); all nonhuman (A.I. is the only clinical application); or both human and nonhuman (the A.I. clinical application is monitored and controlled by a doctor). The theoretical framework of the study combines Herbert Simon's posthuman notion of artificial intelligence as human or nonhuman 'complex information processing' with a process-oriented view of technology as technical rationality. The preliminary results of the pilot study contribute new empirical material to the recent literature on 'algorithm aversion' in medical diagnosis (Esmaeilzadeh et al., 2021; Juravle et al., 2020), both by showing the role of human and nonhuman intelligence in it and by highlighting the buffering role of human and nonhuman encounters in the posthuman process of medical work. The article seeks to extend the existing literature in three ways: first, by offering a posthuman perspective on the organizational implications of A.I. that avoids the anthropocentric assumption that human agency and A.I. are ontologically different and should be conceptualized separately; second, by providing an empirical account of how human and nonhuman intelligence encounter each other in medical work with A.I.; third, by extending the processual understanding of how these encounters affect 'algorithm aversion', adding more nuance to the interpretation of the nature of this phenomenon.