University of Central Florida Undergraduate Research Journal - Cross-Modal Distraction on Simultaneous Translation: Language Interference in Spanish-English Bilinguals

Introduction

Bilingualism is a popular area of study in neuroscience, and its results better inform our understanding of the psychology of language. But how does bilingual cognition shape our mental processing? The existence of a “bilingual advantage” is widely debated. If such an advantage exists, the benefits of learning a second language could push educational curricula to adopt language acquisition courses earlier on. This change would also promote cultural inclusion and inter-societal exchange, diminishing language barriers and biases.

Literature Review

The Stroop task (Stroop, 1935) is one method for testing how bilingual persons process words in both languages. In a Stroop task, participants are shown words printed in colored ink and are asked to name the ink color. The word either matches the ink color, e.g., “blue” written in blue (termed congruent), or does not, e.g., “blue” written in red ink (incongruent). The Stroop task can be modified to test bilinguals by switching the language of the word or of the response. The competing urge to read the word rather than name the color is called “interference.” We are taught to read words, and the color they are printed in is merely a detail; this task asks participants to do the opposite: to suppress the automatic processing of reading the word. Interference is a competition between two sets of information for processing. For example, in a crowded restaurant you must selectively tune out other conversations to hear the server or the person you are with. Intralingual means within one language, while interlingual means between two languages. In the context of a bilingual Stroop task, the incongruent condition produces interlingual interference while the congruent condition produces intralingual interference. There is a substantial amount of research applying the Stroop task to bilinguals, the earliest dating from the 1960s.

The crucial part of these experiments is the measurement of interlingual and intralingual interference, quantified by comparing reaction times in the various conditions. The consensus is that intralingual interference is higher than interlingual interference (Dyer, 1970; MacLeod, 1991; Preston, 1965). The reason is explained by Brauer (1998):

...bilinguals store words of different languages in different language dictionaries. When only one language is involved, the stimulus is highly compatible with the response and can exert more interference than in the between-language conditions, in which the interference has to spread from one dictionary to another. (p. 318)
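The interference measure used throughout this literature — the reaction-time cost of an incongruent condition relative to a congruent baseline — can be sketched as a simple comparison of condition means. The numbers and function names below are illustrative assumptions, not data or analysis code from any of the cited studies:

```python
from statistics import mean

# Hypothetical reaction times (ms) for one participant.
# Congruent trials serve as the baseline.
congruent_rt = [540, 555, 560, 548]
# Incongruent trials within one language (intralingual interference).
intra_incongruent_rt = [690, 705, 712, 698]
# Incongruent trials across languages (interlingual interference).
inter_incongruent_rt = [630, 645, 638, 641]

def interference(incongruent, baseline):
    """Interference effect = mean RT slowdown relative to the congruent baseline."""
    return mean(incongruent) - mean(baseline)

intralingual = interference(intra_incongruent_rt, congruent_rt)
interlingual = interference(inter_incongruent_rt, congruent_rt)

# With these illustrative numbers, within-language interference exceeds
# between-language interference, matching the consensus described above.
print(intralingual, interlingual)
```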

Intra/interlingual interference is multi-factorial, with the most significant factor being language proficiency. Mägiste (1984) found that the dominant language creates a higher level of intralingual than interlingual interference, implying that a language becomes harder to filter out as you become more proficient in it. Similarity between the competing languages plays a role as well: closely related languages that share an alphabet, such as English and Spanish, create higher interference than distant pairs such as Arabic and English (Chen & Ho, 1986). These bilingual Stroop results have been a pivotal part of investigating language processing.

Another method of producing language interference in bilinguals is the dichotic listening task, which measures how auditory language stimuli interfere with verbal language production while bilinguals translate simultaneously. Subjects wear headphones in which the target stimulus is played in one ear, with either silence (control) or distractor words playing in the other. The goal is to selectively focus on the ear (indicated by the researcher) that plays the target words. Bilinguals, however, must simultaneously translate the target word while an English or Spanish distractor word plays in the other ear (or silence for control). For example, a participant is instructed to repeat only what they hear in the left ear, to ignore the right ear, and to respond in English. If the left headphone says “bar” and the right says “fish,” the correct response is “bar.” Choosing phonetically similar words (e.g., “bar” and “car”) would increase difficulty and cause intralingual interference, because all words are in English; any error in response would then be due to the distraction of English coming from the right ear. Interlingual interference would occur upon introduction of another language.

Soveri, Laine, Hämäläinen, and Hugdahl (2011) found that bilinguals are better at filtering out irrelevant stimuli than monolinguals, as they can suppress the unused language when speaking. Bilinguals channel only one language at a time and can easily ignore the inactive one. The bilingual advantage is described as better performance on executive tasks (e.g., dichotic listening) due to a higher level of cognitive functioning. This advantage is explained by Desjardins and Fernandez (2017): “The regular use of two languages requires that bilinguals control their attention and select the target language, which, in turn, is reflected in greater cognitive control on tasks with distracting information.” The term “regular use” in that conclusion points to a limitation of the finding: bilinguals who use both languages less regularly might not show the same degree of cognitive control, if any. There is no consensus on whether a bilingual advantage exists.

Few researchers have examined the bilingual dichotic listening task through the lens of inter/intralingual interference. Two studies that do analyze it in those terms are Edith Mägiste (1984) and Everdina Lawson (1967). Lawson found that no switching of attention to the distractor stimuli occurred during the experiment, due to the high mental load of the task, leaving subjects largely unaffected by the distractor stimuli. Even so, fewer errors were made when the language of the distractor channel was the same as the target language of the translation. This result implies that the distracting language has some effect on translation accuracy; otherwise the error rate would have remained constant across all trials. Lawson also suggested that the study be reproduced with more subjects, as her sample consisted of only six educated males.

Mägiste performed two experiments: a bilingual Stroop task and a bilingual dichotic listening task. In the listening task, intralingual interference was higher than interlingual, though not as high as in the Stroop task. The results also showed that higher fluency in a language allowed subjects to ignore its distractor stimuli, the same result Soveri observed in her dichotic listening task (Mägiste, 1984; Soveri et al., 2011). A monolingual comparison group would illustrate differences between groups and could confirm or refute a bilingual advantage. Neither Lawson nor Mägiste randomized which ear the participant translated from: participants exclusively translated either the left or the right ear without switching during the experiment. This method can easily lead to better performance due to practice, or to a right/left-ear advantage. In addition, both researchers measured responses only in terms of errors. By contrast, I randomized the translated ear within subjects, including right-, left-, and both-ear stimuli, and I measured both accuracy (errors) and reaction times in milliseconds, recorded from a serial-response box. Whereas Mägiste used sentences and Lawson used passages from a book for translation, I used a one-word setup to limit extraneous variables affecting reading comprehension.

The proposed experimental setup of this research is a novel way of evaluating language interference in bilinguals and a new addition to the field of bilingual psycholinguistics. The cross-modal setup, chosen for its simplicity and novelty to the research discourse, acts as a combination of the Stroop and dichotic listening tasks. Presenting a visual target word on the screen and an auditory distractor word in the headphones is a setup based on existing literature suggesting that background speech or vocal music has a negative effect on cognitive performance in tasks with visual verbal material (Cauchard, Cane, & Weger, 2012; Hughes et al., 2011; Pool, Koolstra, & van der Voort, 2003; Salamé & Baddeley, 1989). In short, speech gains access to the short-term store of the phonological aspects of visual information, allowing distractor speech to cause interference in the cognitive task (Salamé & Baddeley, 1989). Pool et al. (2003) describe this effect as a result of limited resources: the dual information may exceed the capacity of cognitive resources, leading to only one source being processed (limited-capacity theory). Competition between the two modalities (visual and auditory) for resources decreases performance on working-memory tasks. As the proposed task pairs visual information with distracting auditory information, the distractor speech may negatively affect the subject’s performance.

Experimental tasks that include words are highly susceptible to the frequency effect, defined as the recognition of high-frequency (more common) words more easily or quickly than low-frequency words (Howes & Solomon, cited in Harley, 2001, p. 158). The more you use or see a word, the more common it becomes in your vocabulary, increasing its frequency and thus leading to faster recognition and retrieval. The age at which you first learn a word (age of acquisition) affects frequency as well, with words learned earlier in life being recognized faster than those learned later (Harley, 2001, p. 158). Basic words are learned first and are thus used more regularly and over a longer period than specialized, contemporary language gained later in life. More common words also tend to be shorter and take less time to say (Harley, 2001, p. 160), so reaction times could be faster for shorter words, with frequency affecting this as well. Recognition is faster for low-frequency words that have a large neighborhood (Andrews, 1989; Grainger, 1990; McCann & Besner, 1987, cited in Harley, 2001, p. 160). Neighbor words are phonetically similar and differ by one or two letters (e.g., dog and fog). A trial with visual and auditory neighbor words would create the most interference, because the words would activate similar dictionaries and compete for processing. Word frequency, length, and phonetics were therefore evaluated as independent variables in my data analysis.

The primary purpose of this study is to test the extent of inter- and intralingual interference in a cross-modal, audio-visual simultaneous translation task in Spanish-English bilinguals. The secondary purpose is to determine whether a bilingual advantage occurs in this task. A bilingual advantage comes with uncertainty, as it is observed in some experimental settings but not others. I will test the following hypotheses:

Hypothesis 1: Bilinguals will produce less interference than monolinguals.

Hypothesis 2: Less proficient bilinguals will produce more interference than more proficient bilinguals.

Hypothesis 3: Bilinguals will produce more intralingual than interlingual interference.

Hypothesis 4: Phonetically similar words will produce more interference.

Hypothesis 5: A frequency effect will be observed in both groups.

 

Methods