Artificial intelligence in scientific medical writing: Legitimate and deceptive uses and ethical concerns

Davide Ramoni, Cosimo Sgura, Luca Liberale, Fabrizio Montecucco, Federico Carbone
2024-01-01

Abstract

The debate surrounding the integration of artificial intelligence (AI) into scientific writing has already attracted significant interest in medicine and the life sciences. While AI can undoubtedly expedite the creation and correction of manuscripts, it also raises several concerns. The crossover between AI and the health sciences is relatively recent, but the use of AI tools among physicians and other life scientists is growing very fast. Within this whirlwind, it is becoming essential to understand where we are heading and what the limits are, including from an ethical perspective. Modern conversational AIs exhibit a context awareness that enables them to understand and remember a conversation beyond any predefined script. Even more impressively, they can learn and adapt as they engage with a growing volume of human language input. They all share neural networks as their underlying mathematical models and differ from older chatbots in their use of a specific network architecture called the transformer model [1]. Some were trained on more than 100 terabytes (TB) of text data (e.g., Bloom, LaMDA) or even 500 TB (e.g., Megatron-Turing NLG); version 4 of ChatGPT (GPT-4) was trained on nearly 45 TB but stays up to date through its internet connection and can integrate with plugins that extend its functionality, making it multimodal.
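The transformer architecture referenced above [1] is built around self-attention, which lets every token in a conversation weigh its relevance to every other token rather than following a predefined script. As a purely illustrative aside, a minimal sketch of the core scaled dot-product attention step in Python/NumPy follows; the function name, array shapes, and single-head form are simplifying assumptions for the example, not details taken from the paper.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k); V: (seq_len, d_v).
    # Each output row is a weighted mix of the rows of V, with weights
    # given by how strongly that query matches every key.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # scale to keep the softmax stable
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

In a full transformer this operation runs in parallel across many heads and layers, with Q, K, and V produced from the input embeddings by learned projections.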
Files in this record:
1-s2.0-S0953620524002954-main.pdf (open access, publisher's version, Adobe PDF, 352.15 kB)

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1204297