
NWO Veni project

Efficient communication full of errors

Linguistic performance in a virtual speech community

Financed by an NWO Veni grant (Netherlands Organisation for Scientific Research, project number 275-89-004).


A very short description

In the coming three years I will try to expand my Optimality Theory-based model of linguistic performance (the so-called Simulated Annealing for OT Algorithm) into a model of communication. Just as in a computer game, virtual agents will "speak" and "listen" to each other, and baby agents will have to acquire their future mother tongue. Yet, just like us, these agents will not speak fully grammatically, but will make certain speech errors. So my computer simulation will hopefully contribute to our understanding of the role of performance errors in language comprehension, language learning and language change.

  • Originally appeared in: Katblad 96 (June 2009)


A longer description for non-specialists

The project focuses on the role of speech errors in communication. Speech errors are forms that the speaker would not consider correct, yet involuntarily produces. In my doctoral dissertation, I argued that (at least some of) these errors are made because our brain is willing to trade precision for speed. During slow speech the brain has more time to find the correct word; in fast speech, the brain has to speed up the production mechanism, resulting in more errors.

The question is how these errors influence language and communication. In particular:

  1. Do they make language understanding more difficult?
  2. Do they result in less accurate communication? Is the gain in speed worth the rate of misunderstanding, that is, the proportion of cases in which the hearer's interpretation of what has been said differs from the speaker's intention?
  3. Do errors make language learning more difficult?
  4. Do errors cause language change on a historical time scale?

I am tackling these questions with computer simulations. My computer programs include a number of "agents", like characters in a computer game, who "communicate" with each other. Newborn agents, who first have to learn their future mother tongue, are added to this "community of virtual speakers", while older agents are removed from the "community" after a while. Having created the community, I can observe how the agents communicate, whether they understand each other's intentions, how well and how fast children learn their mother tongue, whether their mother tongue is exactly the same as their parents' language, and how the language of the community develops on a longer time scale.
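
Purely as an illustration, here is a minimal Python sketch of such a community. Everything in it (the Agent class, the 10% error rate, the learning rule that simply adopts the most frequent form heard) is a toy simplification of mine, not the project's actual model:

    import random

    ERROR_RATE = 0.1   # illustrative probability of a performance error
    LIFESPAN = 5       # rounds before an agent leaves the community

    class Agent:
        """A virtual speaker: a grammar plus production and learning."""
        def __init__(self, grammar=None):
            self.grammar = grammar
            self.age = 0

        def speak(self):
            # Production: usually the grammatical form, sometimes an error.
            if random.random() < ERROR_RATE:
                return self.grammar + "*"   # "*" marks an erroneous variant
            return self.grammar

        def learn(self, heard):
            # Toy acquisition: adopt the most frequent form in the input,
            # errors included, so an error can in principle be acquired.
            self.grammar = max(set(heard), key=heard.count)

    community = [Agent("forma") for _ in range(10)]

    for round_number in range(100):
        heard = [speaker.speak() for speaker in community]
        baby = Agent()
        baby.learn(heard)        # the newborn acquires its mother tongue
        community.append(baby)
        for agent in community:
            agent.age += 1
        community = [a for a in community if a.age < LIFESPAN]

    print({agent.grammar for agent in community})  # has the language changed?

Even in this toy version one can already ask the questions listed above: if the error rate is high enough, or if chance produces an error-heavy sample of speech, a newborn acquires the erroneous variant, and the community's "language" drifts over time.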

Although I am not creating nice pictures to represent each member of the community, I still need to equip them with four tools: the knowledge of the language (called "linguistic competence" in the jargon), the language production mechanism ("linguistic performance"), the language comprehension mechanism, as well as the language learning mechanism. The knowledge is represented using Optimality Theory, a popular contemporary linguistic approach. The language production mechanism will be the Simulated Annealing for Optimality Theory Algorithm (SA-OT), developed in my doctoral dissertation. Yet, to model the last two mechanisms as well, new algorithms have to be developed as part of the research project.
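
To convey the flavour of SA-OT without its technical details: production is modelled as a random walk over the candidate forms that initially tolerates bad moves and gradually becomes stricter ("cooling"); if cooling is too fast, mimicking fast speech, the walk can freeze in a locally optimal but ungrammatical candidate, and that frozen candidate is the speech error. The sketch below is a simplified stand-in, not the actual algorithm: the candidate forms, their violation profiles and the scalar "energy" (exponential weights approximating strict constraint ranking) are all toy assumptions of mine, whereas the real SA-OT operates on the constraint hierarchy directly.

    import math
    import random

    # Toy candidate forms with OT-style violation profiles (higher-ranked
    # constraint first); both the forms and the profiles are made up.
    CANDIDATES = ["kat", "katt", "kata", "katta"]
    VIOLATIONS = {"kat": (0, 5), "katt": (1, 0),
                  "kata": (0, 1), "katta": (2, 0)}

    def energy(form):
        # Scalar stand-in for strict ranking: constraint k gets weight 10**-k.
        return sum(v * 10 ** -k for k, v in enumerate(VIOLATIONS[form]))

    def neighbours(form):
        # Neighbourhood structure: candidates adjacent in the list.
        i = CANDIDATES.index(form)
        return [CANDIDATES[j] for j in (i - 1, i + 1) if 0 <= j < len(CANDIDATES)]

    def produce(cooling=0.9, t_max=10.0, t_min=0.01):
        """One production event: a simulated-annealing walk over the candidates."""
        form = random.choice(CANDIDATES)
        t = t_max
        while t > t_min:
            candidate = random.choice(neighbours(form))
            delta = energy(candidate) - energy(form)
            # Accept improvements always; worsenings with probability exp(-delta/t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                form = candidate
            t *= cooling   # cooling close to 1 = slow cooling = slow speech
        return form

    best = min(CANDIDATES, key=energy)        # the grammatical (optimal) form
    for rate in (0.5, 0.99):                  # fast vs. slow speech
        outputs = [produce(cooling=rate) for _ in range(1000)]
        errors = sum(o != best for o in outputs)
        print(f"cooling={rate}: {errors / 10:.1f}% erroneous forms")

With these toy numbers, fast cooling should leave noticeably more walks trapped in the locally optimal "kat" than slow cooling does, reproducing in miniature the precision-for-speed trade-off described above.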

These algorithms can be seen as the software running in our head. If the simulated community reflects what is going on in communities of real people, if the production algorithm makes the same errors as human speakers, if the interpretation algorithm behaves similarly to human listeners, and if the learning algorithm imitates child language development, then we can argue that we have successfully uncovered the software in our brain.