
Implicit learning of non-adjacent dependencies

Onnis, L.
2015-01-01

Abstract

Language and other higher cognitive functions require structured sequential behavior, including non-adjacent relations. A fundamental question in cognitive science is what computational machinery can support both the learning and representation of such non-adjacencies, and what properties of the input facilitate such processes. Learning experiments using miniature languages with adults and infants have demonstrated the impact of both high variability (Gómez, 2003) and nil variability (Onnis, Christiansen, Chater, & Gómez, 2003, submitted) of intermediate elements on the learning of non-adjacent dependencies. Intriguingly, current associative measures cannot explain this U-shaped curve. In this chapter, extensive computer simulations using five different connectionist architectures reveal that Simple Recurrent Networks (SRNs) best capture the behavioral data, by superimposing local and distant information over their internal 'mental' states. These results provide the first mechanistic account of implicit associative learning of non-adjacent dependencies modulated by distributional properties of the input. We conclude that implicit statistical learning may be more powerful than previously anticipated.

Most routine actions that we perform daily, such as preparing to go to work, making a cup of coffee, calling up a friend, or speaking, are performed without apparent effort, and yet all involve very complex sequential behavior. Perhaps the most apparent example of sequential behavior–one that we have tirelessly performed since we were children–involves speaking and listening to our fellow humans. Given the relative ease with which children acquire these …
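To make the abstract's key claim concrete, the following is a minimal sketch of the forward dynamics of an Elman-style Simple Recurrent Network: the recurrent "context" connections mean that the hidden state at each step superimposes the current input and distant history. The token inventory, dimensions, and weights below are hypothetical illustrations (loosely mirroring a_X_b miniature-language stimuli), not the chapter's actual simulations, and no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature-language vocabulary: frame elements a/b and c/d,
# plus intermediate elements X1..X3 (dimensions chosen only for illustration).
vocab = ["a", "b", "c", "d", "X1", "X2", "X3"]
V, H = len(vocab), 8  # one-hot input size, number of hidden units

W_xh = rng.normal(0.0, 0.5, (H, V))  # input -> hidden weights
W_hh = rng.normal(0.0, 0.5, (H, H))  # hidden -> hidden (recurrent) weights
b_h = np.zeros(H)

def one_hot(token):
    v = np.zeros(V)
    v[vocab.index(token)] = 1.0
    return v

def run(sequence):
    """Return the hidden state after processing the sequence left to right."""
    h = np.zeros(H)  # context layer starts empty
    for tok in sequence:
        # The new state mixes the current (local) input with the carried-over
        # (distant) context in a single vector.
        h = np.tanh(W_xh @ one_hot(tok) + W_hh @ h + b_h)
    return h

# After the same intermediate element X1, the hidden state still differs
# depending on the distant initial element, so the non-adjacent "a" vs "c"
# remains recoverable from the network's internal state.
h_a = run(["a", "X1"])
h_c = run(["c", "X1"])
print(np.max(np.abs(h_a - h_c)) > 0.0)
```

Even with untrained random weights, the two hidden states diverge, which is the representational precondition for the learning the chapter investigates: gradient training would then shape these states so that the correct frame-final element (b vs d) can be predicted.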
Files in this record:

File: Onnis et al. in -Implicit and Explicit Learning of Languages-John Benjamins Publishing Company (2015).pdf
Access: closed
Type: Post-print document
Size: 751.82 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/983753