

The Study of Prelexical and Lexical

Processes in Comprehension:

Psycholinguistics and

Functional Neuroimaging

DENNIS NORRIS AND RICHARD WISE

DENNIS NORRIS Medical Research Council Cognition and Brain Sciences Unit, Cambridge; RICHARD WISE Medical Research Council Cyclotron Unit and Imperial College School of Medicine, Hammersmith Hospital, London

ABSTRACT Here we review the functional neuroimaging literature

relating to prelexical auditory and visual processes. We

relate neuroimaging work to current psychological models of

word perception and discuss some of the problems inherent in

the use of the standard subtractive method in this area. The

signal returned by cortex associated with speech perception is

large, which makes the techniques well suited to the study of

prelexical processes. The major regions involved are primary

and association auditory and visual cortices of both hemispheres.

The results of the neuroimaging work are shown to be

consistent with other studies using ERP and MEG.

Functional neuroimaging of prelexical

and lexical processes: Introduction

Over the past decade, a number of functional neuroimaging studies of language activation have been published. Studies with positron emission tomography

(PET) have predominated, although the number of

functional magnetic resonance imaging (fMRI) studies

is increasing. Both techniques rely on the rise in regional

cerebral blood flow (rCBF) that accompanies a

net increase in local synaptic activity. PET activation

studies are based on the accumulation of regional tissue

counts after the intravenous bolus infusion of radiolabeled water (H₂¹⁵O) (Mazziotta et al., 1985). The signal

in the most commonly used fMRI technique, the BOLD

(blood oxygenation level-dependent) image contrast,

originates from an increase in the oxyhemoglobin:deoxyhemoglobin

ratio on the venous side of the local intravascular

compartment of the tissue being sampled; a

transient increase in local synaptic activity is associated

with an increase in rCBF in excess of the rise in oxygen

consumption, with greater oxygen saturation of venous

blood (Thulborn, 1998). Emphasizing the source of the

signal in functional neuroimaging acknowledges one of

the two major limitations of these techniques in the

study of prelexical (and other) processes: Changes in

nutrient blood flow occur over many hundreds of milliseconds

whereas many of the underlying electrochemical

events are complete in tens of milliseconds. The

limited temporal resolution of functional neuroimaging

(thousands of milliseconds with fMRI; and, with PET,

neural transients have to be summed over 15-30 seconds)

is compounded by limited spatial resolution.

Even the theoretical resolving power of MRI (1-2 mm)

may be misleading, as the signal comes from the intravascular

compartment, possibly a little distant from the

local neural system under investigation. The signal from PET, the tissue concentration of H₂¹⁵O, more directly

signals where events are occurring, but the physics associated

with this technique, and the smoothing required

in image analysis, means that it is difficult to resolve separate

peaks of activation that are less than 5 mm apart.
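To make the resolution point concrete, the following minimal Python sketch (using NumPy; the 12-mm FWHM smoothing kernel is an assumed, typical value, not one taken from this chapter) shows how two nearby activation peaks merge into a single peak after the smoothing applied in image analysis:

    import numpy as np

    # Two point-like activation peaks 4 mm apart on a 1-mm grid.
    x = np.arange(-20, 21)                  # position, mm
    signal = np.zeros(x.size)
    signal[x == -2] = 1.0
    signal[x == 2] = 1.0

    # Gaussian smoothing kernel of the kind applied to PET images
    # (an assumed FWHM of 12 mm).
    sigma = 12.0 / 2.3548                   # FWHM to standard deviation
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()

    smoothed = np.convolve(signal, kernel, mode="same")

    # Count local maxima: only one survives the smoothing.
    maxima = np.sum((smoothed[1:-1] > smoothed[:-2]) &
                    (smoothed[1:-1] > smoothed[2:]))
    print("separate peaks before smoothing: 2, after:", maxima)

Run as written, the script reports a single maximum: the two sources are indistinguishable at this separation.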

It is possible to overcome the problem of temporal resolution

when studying some functional systems, such as

visual attention, by combining neuroimaging and electrophysiological

techniques to relate a component of an

event-related potential to an activated region on a PET/

fMRI image (Heinze et al., 1994). The limited spatial resolution means that an “activation” is the net change in activity of many millions of synapses.
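The temporal limitation noted above can be made equally concrete. Below is a minimal sketch (Python/NumPy; the difference-of-gammas hemodynamic response function and its parameters are conventional modeling assumptions, not values from this chapter) of how a neural event that is over in tens of milliseconds is smeared into a hemodynamic response lasting seconds:

    import numpy as np
    from math import gamma

    dt = 0.05                                # 50-ms resolution
    t = np.arange(0.0, 30.0, dt)             # seconds

    def gamma_pdf(x, a):
        # Gamma density with shape a and unit scale.
        return x ** (a - 1) * np.exp(-x) / gamma(a)

    # An assumed canonical difference-of-gammas hemodynamic response.
    hrf = gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0
    hrf /= hrf.sum()

    # A neural transient that is over within one 50-ms bin.
    neural = np.zeros(t.size)
    neural[0] = 1.0

    bold = np.convolve(neural, hrf)[: t.size]
    print("BOLD response peaks %.1f s after a ~50-ms neural event"
          % t[bold.argmax()])

The 50-ms transient produces a response peaking roughly five seconds later, which is why individual electrochemical events cannot be separated in the imaging signal.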

Using modern PET cameras it is possible to do a 12- to 16-scan activation study in under two hours with a radiation

exposure acceptable to radiation advisory committees.

An fMRI study of comparable duration, with no

exposure to ionizing radiation, typically allows ten times

the number of measurements, although fMRI has its


own problems: relatively low sensitivity, susceptibility to

movement artifacts because of the higher spatial resolution,

and the sheer volume of data that is acquired.

Although the limitations of functional neuroimaging

restrict the ability to address many issues of interest to

psychologists and psycholinguists, relating behavioral

observations to human brain structure and physiology is

one of the more important bridges to cross in cognitive

neuroscience. This chapter reviews currently available

data on sublexical and lexical processing of both spoken

and written language and considers the success (or otherwise)

of functional neuroimaging and electrophysiological

studies when addressing psychological theories

of prelexical processing. The brain regions involved are

the primary and association auditory and visual cortices.

One of the challenges in such studies has been to demonstrate

lateralized activations, as it soon became evident

from the earliest PET studies that the perception of

heard and seen words produced very symmetrical activations

in auditory and visual cortices, respectively, in

terms of both peak and extent.

The subtractive method and language

The standard functional neuroimaging paradigm is the

subtractive method. Ideally, two tasks are chosen which

differ only in their demands on a single process. Subtraction

of images obtained during the performance of

both tasks identifies the brain area(s) responsible for that

process. Effective application of this method needs a

good cognitive model of the processes under study and

a detailed analysis of the tasks. An a priori hypothesis

about how the cognitive model might be implemented

neurally will also considerably enhance the interpretation

of the results.
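In computational terms, the method reduces to a voxelwise comparison of scans acquired under the two tasks. The Python sketch below (NumPy; the array sizes, effect size, and threshold are invented for illustration and do not correspond to any particular study) shows the core logic:

    import numpy as np

    rng = np.random.default_rng(0)
    n_scans, shape = 12, (8, 8, 8)          # toy study dimensions

    # Simulated rCBF images for task A and task B, which ideally
    # differ in their demands on a single process.
    task_a = rng.normal(50.0, 5.0, (n_scans, *shape))
    task_b = rng.normal(50.0, 5.0, (n_scans, *shape))
    task_a[:, 2, 3, 4] += 20.0              # one truly "activated" voxel

    # Voxelwise paired comparison: mean difference over its
    # standard error, i.e., a t statistic at every voxel.
    diff = task_a - task_b
    t_map = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_scans))

    # Voxels surviving a (purely illustrative) threshold are
    # attributed to the process by which the tasks differ.
    print("suprathreshold voxels:", np.argwhere(t_map > 5.0))

Everything the inference delivers depends on the two tasks really differing in only the intended process, which is exactly the assumption that is hard to secure for language.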

Even if all of these prerequisites were met, any attempt to apply the subtractive method to isolate a specific stage of linguistic processing faces considerable technical and theoretical problems.

The early stages of language processing are highly automatic

and overlearned, so it is difficult or impossible to

devise tasks that make listeners or readers process input

to one level and no further. Consider the problem of trying

to force spoken input to be processed up to, but not

including the lexical level. The obvious comparison

here would be between words and nonwords. However,

all current theories of spoken word recognition assume

that nonwords will activate a number of partially matching

candidate words to some level. The best we can do is

hope that nonwords produce less lexical activation than

words. To compound the problem, once input is processed

to the lexical level, it is likely also to be processed

semantically and possibly even interpretively. So, words

will also activate semantic areas while nonwords will activate

lexical areas (at least). Unfortunately, subtracting

word and nonword processing is not going to have the

desired effect of isolating areas responsible for a specifically

lexical level of processing. This may explain why

at least two PET studies have failed to find any differences

between words and nonlexical stimuli. Hirano

and colleagues (1997) compared normal Japanese sentences

with the same sentences played in reverse. Fiez

and colleagues (1996) compared words with nonwords.

Neither study found differences.

Models

The central focus of cognitive models of both spoken

and written word recognition has been the lexical access

process itself. Theories like the interactive activation

models of McClelland and Rumelhart (1981) and

Grainger and Jacobs (1996) in the visual domain, and

TRACE (McClelland and Elman, 1986), Shortlist (Norris,

1994), and Cohort (Marslen-Wilson and Welsh, 1978)

in the spoken domain have all concentrated on explaining

how orthographic or phonological representations

make contact with lexical representations. These models

differ in important respects (for example, TRACE is interactive

while Shortlist is bottom-up); however, with

the exception of the Cohort model, each of these theories

assumes that lexical access involves a process of

competition between simultaneously activated lexical

candidates. Visual or spoken input results in the activation

of a number of matching, or partially matching, lexical

candidates that compete with each other by means

of lateral inhibition until a single winning candidate

emerges. In the case of spoken input, the competition

process also performs the essential task of parsing continuous

input (in which word boundaries are generally

not marked) into a sequence of words. The principle of

competition now has extensive empirical support in the

case of both spoken (e.g., McQueen, Norris, and Cutler,

1994) and visual input (e.g., Andrews, 1989; Forster and

Taft, 1994). So, there is widespread agreement among

models in terms of the broad characterization of lexical

access. There is much less of a consensus about prelexical

processing. In reading there is unanimity over the

importance of letters in prelexical processing, but much

less certainty about the nature of any intermediate orthographic

representations (e.g., Rapp, 1992; Perea and

Carreiras, 1998, on syllables) or whether phonological

representations play a role in lexical access (e.g., van Orden,

1987; Lesch and Pollatsek, 1998).
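The competition dynamics these models share can be made concrete with a small simulation. The Python sketch below is a generic interactive-activation-style competition loop, not an implementation of TRACE or Shortlist; the candidate words, their bottom-up support values, and the rate constants are all invented for illustration:

    import numpy as np

    # Candidates activated by the spoken input "carpet", with invented
    # bottom-up support proportional to goodness of match.
    words = ["car", "pet", "carpet"]
    support = np.array([0.6, 0.6, 0.9])
    act = np.zeros(3)

    # "car" and "pet" each overlap "carpet" in the input, so each
    # competes with it; "car" and "pet" claim different stretches.
    overlap = np.array([[0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0],
                        [1.0, 1.0, 0.0]])

    for _ in range(40):
        inhibition = overlap @ act               # lateral inhibition
        act += 0.2 * (support - inhibition) - 0.1 * act
        act = np.clip(act, 0.0, None)            # no negative activation

    for word, a in zip(words, act):
        print(f"{word:8s}{a:5.2f}")

After a few dozen iterations the better-supported parse ("carpet") suppresses the competing two-word parse ("car" plus "pet"), illustrating how competition both selects words and segments continuous input.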

In speech, most models largely follow a standard linguistic

hierarchy and have stages of acoustic, phonetic,

phonemic, and phonological analysis, although not all


models have all stages. For example, TRACE adopts a

very conventional linguistic approach with levels corresponding

to features and phonemes. However, TRACE

has no phonological representation of metrical structure

such as mora, syllable, or foot. Shortlist (Norris et al.,

1997) assumes that metrical information must be available,

but it too does not specify an explicit prelexical

stage of phonological processing. In fact, although

Shortlist accesses the lexicon via phonemic representations,

this is really a matter of implementational convenience

rather than a result of a commitment to a

phonemic level of representation.

The existence of a strictly phonemic level of processing

has been questioned in both the linguistic and the psycholinguistic

literature. Some linguistic frameworks, such

as underspecification theory (cf. Archangeli, 1984; Kiparsky, 1985; Pulleyblank, 1983), have no role for a phonemic

level of representation. In psychology, Lahiri and

Marslen-Wilson (1991) have argued that the work usually

attributed to a phonemic level can be accomplished by a

level of featural representation instead. Marslen-Wilson

and Warren (1994) have argued that phonemic and phonological

representations are constructed postlexically

(but see Norris, McQueen, and Cutler, in press). Other

authors have argued that the syllable is the most important

prelexical level of representation (Mehler, 1981) and

that phonemes play only a secondary role. It should be

clear from this that psycholinguists cannot yet offer a definitive

cognitive account of prelexical processes and representations.

Indeed, determining exactly what those

processes and representations are is one of the central

goals of current psycholinguistic research.

Implementation

Even when we seem to be asking very simple questions

about large-scale architectural issues, questions of implementation

can significantly alter the kind of conclusions

we might draw from an imaging study. Consider the

problem of identifying areas responsible for auditory

and “phonological” processing. In the imaging literature

this has been approached by comparing speech (in either

active or passive listening tasks) with “nonspeech”

stimuli such as tones (Demonet et al., 1992, 1994; Binder

et al., 1996), noise bursts (Zatorre et al., 1992), or signal-correlated

noise (Mummery et al., in press). The assumption

behind these studies is that the nonspeech

stimuli will not activate the areas responsible for phonological

processing. But both “nonspeech” and speech

should be fully processed by acoustic areas. The output

from these areas must then be passed on to the “phonological”

areas. Unless the auditory areas are designed to

prevent nonspeech signals from being passed on to the

phonological system, the phonological system will receive

at least some input. Intuitively, of course, it seems

that speech should engage the phonological areas much

more than nonspeech. But this needn't be the case, certainly

not for the early stages of phonological or phonetic

processing. It is only once some part of the speech

processing system has tried, and then failed, to categorize

the input into a form appropriate for further speech

analysis that subsequent areas will not receive an input.

As we all know, trying to do something we can't do can

be much harder than doing something we can do. We

can see this kind of problem in a very extreme form in

the Auditory Image Model proposed by Patterson, Allerhand,

and Giguere (1995). In this model of early auditory

processing there is a component designed to deal

with periodic signals. When given periodic signals, it

produces a clean stabilized image of the input, revealing

the fine structure of the periodic signal. With aperiodic

signals, this stage produces noise. Depending on the details

of the neural implementation, this component

could do less work when analyzing the periodic signals it

is specialized for than when attempting to analyze aperiodic

signals. However, we should bear in mind that it is

not at all clear how a hemodynamic response on a scan

relates to apparent task “difficulty.” Furthermore, even

connectionist models are not attempts to faithfully capture

the architecture of inhibitory and excitatory neural

subsystems. For example, inhibitory and excitatory synapses

both consume energy, and a “deactivation” (reduction

in local blood flow) on an image reflects a net

reduction of synaptic activity in a region with many

polysynaptic pathways.

At least some of these issues might be profitably approached

by correlational methods. Instead of contrasting

speech and nonspeech, one could vary the strength of

a particular speech property (while keeping the signal relatively

unchanged acoustically). For example, a subject may be required to attend to acoustic or visual signals that vary along one of a number of dimensions, and the changing response of a brain region can be correlated with this varying input. This technique is already being applied by varying the rate of presentation of stimuli, or by using psychological variables

such as word imagability or frequency, and the same

strategy can be used for the physical properties of an input

signal. These techniques can explore the natural processing

of the stimuli without the need to make A - B subtractions

between two more or less metalinguistic tasks.
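A minimal sketch of this correlational logic follows (Python/NumPy; the presentation rates and the simulated regional response are invented, though the 0-90 words-per-minute range echoes the rate studies discussed later):

    import numpy as np

    rng = np.random.default_rng(1)

    # Words per minute presented on successive scans (parametric design).
    rate = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 90.0])

    # Simulated activity in one region: rate-dependent plus scan noise.
    activity = 50.0 + 0.05 * rate + rng.normal(0.0, 0.5, rate.size)

    # Correlate the regional response with the varying input rather
    # than subtracting two task states.
    r = np.corrcoef(rate, activity)[0, 1]
    slope = np.polyfit(rate, activity, 1)[0]
    print(f"r = {r:.2f}, slope = {slope:.3f} units per wpm")

A region whose activity tracks the manipulated property shows up as a high correlation, with no second task, and hence no task-difference assumptions, required.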

Tasks

The processing of heard words is a function of primary

and association auditory cortex; and similarly, seen words

activate striate and prestriate cortex (figure 60.1A,B). One


noticeable feature is that they return a strong signal, even

when the stimuli are “passively” perceived. Although

many neuroimaging studies to date have been concerned

with lexical-semantic processes, the signal obtained in

ventral temporal regions is smaller, both in terms of extent

and peak activity (figure 60.1C). Therefore, anatomically

constrained, strongly activated prelexical systems

are potentially easier to study with functional neuroimaging

than lexical-semantic and syntactic language systems.

Imaging studies have generally adopted standard psycholinguistic

tasks. For example, a popular task in imaging

studies has been phoneme monitoring, in which

listeners are required to press a button when they hear a

particular phoneme in the input. This task is employed

as a way of engaging “phonological”1 processing, and

has been compared with passive listening (Zatorre et al.,

1992, 1996) or monitoring for changes in the pitch of

pure tones (Demonet et al., 1992, 1994). However, from

a psycholinguistic standpoint, the most significant observation

about the phoneme monitoring task is that it can

be performed only by listeners who have been taught to

read an alphabetic script (Read et al., 1986). Illiterates,

for example, are unable to perform phoneme monitoring,

or most other tasks involving explicit segmentation.

In other words, phoneme monitoring makes cognitive

demands over and above those required by normal

speech perception. This fact has long been recognized

by psycholinguists and is an important feature of the

most recent psychological model of phoneme monitoring

and phonetic judgments (Norris, McQueen, and

Cutler, in press).

Interpretation of phoneme monitoring studies therefore

has to be tempered with the possibility that the results

may tell us as much about the structures involved

in performing a particular metalinguistic task as they do

about speech perception itself. However, it can still be a

valuable cognitive task as, in almost all psycholinguistic

accounts, phoneme monitoring is assumed to tap into

the products of the normal speech recognition process at

some level. However, the use of phoneme monitoring to

tap into speech processing is logically very different

from its use in an imaging study if the intention is that

the task should engage normal phonemic/phonetic processing.

If we find that a particular brain area activates

only when performing an explicit metalinguistic task,

like phoneme monitoring, we have no evidence that this

area is directly involved in the normal phonetic or phonological

processing of speech. The area could be responsible

solely for interrogating the normal speech

recognition systems in order to generate a response in

this particular task.

Interestingly, much of the data showing specifically left-hemisphere

activation comes from a comparison of metalinguistic,

or active, tasks with passive listening (Zatorre

et al., 1992; Demonet et al., 1994). There tends to be more

left-hemisphere activation with active listening. A similar

pattern also emerges in a MEG study (Poeppel et al.,

1996) where active discrimination of a voicing contrast

(/bæ/ and /dæ/ vs. /pæ/ and /tæ/) led to an increase in

M100 amplitude in the left hemisphere and a decrease in

the right as compared to a passive listening condition.

Note that one interpretation of the imaging data on

phoneme monitoring and other active listening tasks is

suggested by Fiez and colleagues (1995). Possibly, the increased

attentional demands of these tasks lead to increased

activation in normal speech processing areas

relative to passive listening tasks. This would be consistent

with ERP and MEG studies of auditory processing

that have found increased activation in the auditory areas

contralateral to the attended ear (Näätänen, 1990;

Woldorff, Hackley, and Hillyard, 1991; Woldorff and

Hillyard, 1991; Woldorff et al., 1993). However, any

cognitive model still needs to account for the behavior

of illiterates and assume that there is some process responsible

for the metalinguistic phonemic judgment,

which should presumably result in brain activation itself.

We can see some evidence of activation of other

brain areas involved in phoneme monitoring in the

studies by Zatorre and colleagues (1992, 1996), who

found activation of visual cortex, and by Demonet and colleagues (1994), who found activation of the left fusiform gyrus. Possibly, this is related to the fact that phoneme monitoring is known to be influenced by orthographic factors (Dijkstra, Roelofs, and Fieuws, 1995; see also Donnenwerth-Nolan, Tanenhaus, and Seidenberg, 1981; Seidenberg and Tanenhaus, 1979).

FIGURE 60.1 The first three columns show the orthogonal projections of the brain created by the image data analysis program (SPM96—Wellcome Department of Cognitive Neurology) (Friston et al., 1995a,b): Left = axial; middle = sagittal; right = coronal. The orientation (left/right/anterior) is shown on the bottom row of images. In the fourth column are activations displayed on selected slices of the MRI template available in SPM96. The threshold was set at p < .05, corrected for analysis of the whole brain volume. (A) Twelve normal subjects listening to single words contrasted with seeing the same words. There are bilateral, symmetrical, extensive DLTC activations. In the coronal MRI slice, the activations are seen to run mediolaterally along Heschl's gyrus. (B) As (A), but now seeing single words has been contrasted with hearing words. There are bilateral, symmetrical, extensive posterior striate/prestriate activations. In the axial MRI slice, the activations are seen to extend toward the occipitotemporal junction. As only foveal vision is used to read single words, striate cortex subserving parafoveal and peripheral retinal vision has not been activated. (C) Three experiments with six normal subjects in each (eighteen subjects in all), in which seen and heard word imagability was varied. Words of higher imagability produced greater activity in left ventral temporal cortex (the fusiform gyrus). Both the peak and extent of this imagability effect were much smaller than observed in (A) and (B). It is a feature of all functional imaging experiments on lexical semantic processes, in a temporal lobe region thought to be a major site for the representations of semantic knowledge, that the signal is small relative to prelexical processes.

However, perhaps the most worrying feature of studies

comparing active and passive listening tasks is that passive

listening is more than likely to involve completely

normal phonetic/phonemic processing. What these

studies may well have done is to design tasks that factor

out normal speech processing and highlight the brain areas

involved specifically in the metalinguistic tasks. Indeed,

Zatorre and co-workers (1996) acknowledge that

passive listening would engage an important automatic

component of phonetic processing and may involve essentially

full semantic processing.

Acoustic-phonetic and phonemic processes

It is hard to know exactly where, if at all, to place a

boundary between acoustic and phonetic processing.

One could define phonetic processing as being concerned

with extraction of specifically linguistic features

such as place and manner of articulation of consonants.

Acoustic processing would then be defined as those

characteristics, such as loudness and frequency, that are

not of direct linguistic significance. Architecturally, however,

there is no a priori reason why a particular phonetic

feature should not be computed by the same brain

areas responsible for acoustic analysis rather than some

later, purely linguistic, stage. Indeed, many animal studies

show that primary auditory cortex is sensitive to

complex acoustic features that, in humans, might well be

considered to be phonetic. A great deal of work has

shown that the primary auditory cortex of a number of

different species produces a change in response at voice

onset times analogous to the human category boundary

between voiced and unvoiced consonants (e.g., Eggermont,

1995; Sinex, McDonald, and Mott, 1991; Steinschneider

et al., 1995). Recently, Ohl and Scheich (1997)

have shown that the primary auditory cortex of gerbils is

organized in a manner that is sensitive to the difference

between the first and second formant frequencies of

vowels, an important factor in human classification of

vowels (Peterson and Barney, 1952). In a nonlinguistic

species, presumably, these features must be acoustic,

and not phonetic. In humans, too, we should probably

not be surprised to find such features processed by auditory

cortex rather than some later, specifically linguistic,

stage of phonetic or phonemic processing. As we will

see later, the idea that much phonetically significant processing

takes place in primary auditory cortex also receives

support from many human studies, especially

those using ERP and MEG.

Although studies using PET and fMRI have been directed

at identifying particular stages of phonetic or phonological

processing, other work has addressed more

detailed questions about differences in processing within

individual stages of linguistic analysis. For example, how

does processing differ between vowels and consonants

or even between different vowels? Much of this work

has used ERP, MEG, or even direct cortical stimulation.

Boatman and colleagues (1997) examined the effects

of direct cortical electrical interference on consonant

and vowel discrimination using implanted subdural

electrode arrays. Electrical interference impaired consonant discrimination at one electrode site in each patient on the superior temporal gyrus of the lateral left perisylvian cortex. Without interference, consonant-vowel discrimination was intact at that site, and vowel and tone discrimination remained relatively intact even with electrical interference at the same site.

Rather interestingly, the crucial sites were located differently

in different patients. This suggests that within these

anatomical areas there are individual differences in the

details of functional localization. Such differences could

reflect either innate structural differences or different

outcomes of a learning process.

Given the considerable crosslinguistic variation in

phonemic inventories, both in the number and the nature

of the phonemic categories, learning must play

some role in the establishment of phonemic categories.

Using both ERPs and magnetoencephalographic recordings,

Näätänen and colleagues (1997) compared

processing of vowels by Finnish and Estonian listeners.

They measured both the electrical (MMN) and the magnetic

mismatch negativity (MMNM or magnetic mismatch

field: MMF) response to a set of four vowels. For

the Estonian listeners, the four vowels all corresponded

to prototypical Estonian vowels. For the Finnish listeners,

only three of the four vowels corresponded to

prototypical vowels in their language. Listeners were

presented with the phoneme /e/ and, infrequently, with

one of the other three vowels to elicit a mismatch response.

For the Finnish listeners, there was a much

larger mismatch negativity when the infrequent vowel

was the Finnish /ö/ than when it was a nonprototypical vowel (the Estonian /õ/), even though the /õ/ is actually more dissimilar to the /e/ phoneme in terms of formant structure. Estonian listeners showed large mismatch responses to both /ö/ and /õ/. In contrast to this

phonemically determined response in the MMN amplitude,

MMN latency was a function solely of the degree

of acoustic dissimilarity of the infrequent stimulus. For

the Finnish listeners, the magnetic mismatch negativity (MMNM) response was larger in the left

hemisphere than the right when the infrequent phoneme

was a Finnish prototype vowel. In the left hemisphere

the MMNM originated in the auditory cortex,

but in the right hemisphere the responses were not

strong enough to reliably localize the source of the response.
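The logic of the mismatch measurement itself is easy to state in code. The Python sketch below simulates an oddball sequence and recovers the deviant-minus-standard difference wave; the epoch counts, noise level, and the Gaussian "negativity" injected at about 170 ms are all invented for illustration, not Näätänen and colleagues' data or pipeline:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(-0.1, 0.4, 0.002)          # seconds from stimulus onset

    def epoch(deviant):
        eeg = rng.normal(0.0, 2.0, t.size)   # background EEG noise
        if deviant:                          # assumed extra negativity
            eeg -= 3.0 * np.exp(-((t - 0.17) / 0.03) ** 2)
        return eeg

    # Frequent standards (e.g., /e/) and rare deviants (e.g., /ö/).
    standard = np.mean([epoch(False) for _ in range(400)], axis=0)
    deviant = np.mean([epoch(True) for _ in range(50)], axis=0)

    mmn = deviant - standard                 # the mismatch response
    print(f"MMN peak {mmn.min():.2f} (arbitrary units) "
          f"at {t[mmn.argmin()] * 1000:.0f} ms")

The amplitude and latency of this difference wave are the two measures that, in the study above, dissociated phonemic (amplitude) from acoustic (latency) sensitivity.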

Other studies have examined the neuromagnetic responses

N100m (or N1m or M100), P200m, and SF (sustained

field), which are the magnetic analogs of the

electrical responses N100, P200, and SP (sustained potential).

By combining MEG and MRI, the source of the

N100m evoked by pure tones is known to lie on the surface

of the Heschl gyri, which include primary auditory

cortex (Pantev et al., 1990).

Poeppel and colleagues (1997) measured the N100m

response to vowels varying in pitch and to pure tones.

The N100m dipole localizations in supratemporal auditory

cortex were the same for vowels and pure tones.

They found no differences in N100m amplitude due to

vowel type or pitch. However, response latency was influenced by vowel type but not by pitch. This suggests that processing in supratemporal

auditory cortex is already extracting pitch-invariant phonetic

properties. Aulanko and colleagues (1993) used the

syllables /bæ/ and /gæ/ in a mismatch paradigm where

the syllables were synthesized on 16 different pitches.

They also found that MMNM responses (localized to the

supratemporal auditory cortex) were maintained despite

the variations in pitch. In another MEG study Diesch and

co-workers (1996) looked at dipole localizations of

N100m and SF deflection in response to the German

vowels /a/, /æ/, /u/, /i/, and /ø/. Here, too, there was

considerable intersubject variability in the locations of

the sources, but the ordering of the distances between

N100m and SF equivalent dipole locations was much

more systematic and could be interpreted as reflecting

distances in vowel space or featural representations of the

vowels.

Listening to words

In imaging studies, listening to words without an explicit

task demand produces strong activation in bilateral dorsolateral temporal cortex (DLTC) that is both extensive

and symmetrical (figure 60.1A; see Petersen et al., 1988;

Wise et al., 1991; Binder et al., 1994). This symmetry

seems to be at odds with the “dominance” of the left hemisphere

for heard word perception; psychophysical and

psychological evidence suggests that the temporal resolution

required for analysis of the rapid frequency transitions

associated with consonants (occurring over < 50 ms) is dependent

on a neural system lateralized to the left hemisphere

(for review, see Fitch, Miller, and Tallal, 1997).

It has been suggested that the more constant acoustic

features of words, such as vowel sounds, might be analyzed

by the right hemisphere (Studdert-Kennedy and

Shankweiler, 1970). However, Lund and colleagues

(1986) found that left-hemisphere lesions, mainly located

in Wernicke's area, tended to disrupt vowel perception

whereas none of their patients with lesions in the

corresponding area of the right hemisphere had perceptual

problems.

Speech perception is robust, even when the sounds

are distorted in a variety of ways (e.g., Miller, 1951;

Plomp and Mimpen, 1979). No single cue seems to determine

the comprehensibility of speech and a listener

uses a range of acoustic features, which may explain

why word deafness (agnosia for speech in the absence of

aphasia) usually occurs only after bilateral lesions of dorsolateral

temporal cortex (DLTC) (Buchman et al., 1986;

Polster and Rose, 1998). Therefore, it is to be expected

that acoustic processing of speech input should involve

the DLTC of both hemispheres.

Using a parametric design, where the rate of hearing

single words was varied between 0 and 90 words per

minute (wpm), one PET study distinguished regions in

left and right DLTC that showed an approximately linear

relationship between activity and rate from a single

region, in the left posterior superior temporal gyrus

(postDLTC), where activity was close to maximal at 10

wpm (Price et al., 1992). This study set a precedent for

inferring a difference in processing from the shape of the

activity-input curve. It is true that another study—one using

fMRI to investigate left and right primary auditory

cortex (PAC) and postDLTC—did not reproduce this

original result (Dhankhar et al., 1997); but the preeminent interest in left postDLTC (the core of “classic” Wernicke's area) may, in any case, be misplaced.

become apparent from a number of imaging studies that

the DLTC anterior to PAC (midDLTC) is central to the

acoustic and phonological processing of heard words.

Three studies (Zatorre et al., 1992; Demonet et al., 1992,

1994) have contrasted phoneme monitoring in syllables

or nonwords with decisions on the pitch of stimuli (syllables

or tones). All three studies identified bilateral mid-

DLTC, although activation on the left was greater than

on the right for the detection of speech sounds. This emphasis

on midDLTC, and not postDLTC, in the prelexical

processing of words is also evident in the fMRI study

of Binder and co-workers (1996).

Although neurologists generally attribute a central

role in speech perception to Wernicke's area, support

for this from functional neuroimaging is mixed. Fiez

and colleagues (1996) and Petersen and colleagues


(1988, 1989) all found activation of Wernicke's area

(Brodmann's area 22 close to the temporoparietal junction)

when comparing passive word listening with fixation.

Interestingly, Fiez and co-workers (1996) also

found no differences between words and nonwords.

They acknowledge that this could be due to phonological

analysis, lexical activation, or perhaps to phonological

storage. However, Fiez and colleagues (1995) and

Zatorre and colleagues (1992) failed to find temporoparietal

activation, even though Fiez's group examined

both active and passive listening tasks, and did find temporoparietal

activation when comparing listening to

tones with a fixation task. Binder and co-workers (1996)

demonstrated that activation in the left planum temporale

was similar for tones and words, and there was

greater activity for tones in an explicit task on the stimuli.

However, as we have discussed, a response to nonlinguistic

stimuli does not preclude the possibility that a

region is specialized for a linguistic purpose.

As noted, midDLTC asymmetry in studies using phoneme

monitoring may reflect modulation of DLTC activity

by attentional processes rather than “dominance”

of the left temporal lobe in prelexical processing. A contrast

of “passive” listening to words with listening to signal-

correlated noise (SCN—acoustically complex sounds

without the periodicity of words) demonstrated symmetry

of DLTC function: The rates of hearing both words

and SCN correlated with cerebral activity in PAC and

adjacent periauditory cortex of both hemispheres, and

correlations specific to words were located symmetrically

in left and right midDLTC (Mummery et al., in

press). Frontal activations were absent. Although it cannot

be inferred that symmetrical PET activations imply

symmetrical processing functions, these results do support

single-case studies suggesting that the DLTC of

both hemispheres is involved in the acoustic and phonological

processing of words (Praamstra et al., 1991).

Another, more natural demand on auditory attention

is made when a subject has to “stream out” a particular

source of speech in a noisy environment, the usual example

cited being the cocktail party. Auditory stream segregation

(Bregman, 1990) is open to investigation with

functional neuroimaging, although no studies, to the authors'

knowledge, have as yet been published. However,

the effects of another source of speech sounds, one's own

voice, have been investigated. Attention to one's own articulated

output will be variable, depending on how carefully

a speaker wishes to use on-line, post-articulatory

self-monitoring in the detection and correction of a wide

range of potential speech errors (Levelt, 1989). It is assumed

that the same processors that analyze the speech

of others are used to monitor one's own voice. This has

been confirmed by a number of PET studies (Price et al.,

1996; McGuire et al., 1996). However, studies in humans

and monkeys with single-cell recordings have demonstrated

modulation of temporal cortical activity by phonation/

articulation (Müller-Preuss and Ploog, 1981;

Creutzfeldt, Ojemann, and Lettich, 1989). Figure 60.2

demonstrates a comparable result at the local systems

level in a PET study that investigated variable rates of listening

and repeating in normal subjects. When listening,

each subject heard word doublets, with each word heard

a second time after an interval of 500 ms. Therefore, during

both the listening and repeating conditions the subjects

heard the same word twice, although in the former

both words of each pair came via headphones while in

the latter the second word of each pair was the subject's

own voice. During repeating, particularly at high rates,

the articulated output must be discriminated from the

stimuli so as not to interfere with the acoustic and phonological

analysis of the latter. When activity in left periauditory

cortex was plotted against the rate of hearing

words during both the listening and repeating tasks,

there was some separation of the curves (figure 60.2A);

but when plotted against the rate of hearing the stimuli

alone (figure 60.2B), there was no evidence of modulation

(i.e., suppression) of the response to the stimuli by

articulation. The small additional contribution of own

voice to activity may be explained by a general reduction

of attention to this source of sound, and by the attenuation

of the higher tones of own voice because of

transmission to the middle ear by bone conduction. In

contrast, the same activity-rate plots show suppression of

activity in response to the stimuli in left midDLTC by articulation

(figure 60.2C and D), the physiological expression

of auditory streaming during repeating. Postarticulatory

self-monitoring is likely to be minimal when

repeating single words. Manipulating the complexity of

speech output could be used to test the hypothesis that

varying the demand on post-articulatory self-monitoring

correlates with activity in left midDLTC, which would

confirm modulation of activity in this region by attention

towards one's own speech output.

A further study has assessed the modulation of DLTC

activity by articulation (Paus et al., 1996). Subjects whispered

syllables at rates varying from 30 to 150 per minute

(any residual sound from the subject's larynx was masked

by white noise), and increasing motor activity was associated

with an increase in activity in the left planum temporale

and left posterior perisylvian cortex, attributed to

motor-to-sensory discharges. Such discharges may allow

listeners to rapidly segregate their own articulations from

the simultaneous speech of others.

So far, two activity-rate responses have been recorded

in DLTC in response to hearing single words.

The first reaches a maximum at relatively low rates of


word presentation and there is little increase in activity

for higher rates of hearing words (figure 60.3); this was

the response reported by Price and colleagues (1992) in

left postSTG and approximately describes the behavior

of left midDLTC in figure 60.2C. The second is one of

increasing, if progressively diminishing, activity up to a

rate of ~90-100 wpm, seen in left periauditory cortex

in figure 60.2A; but at higher rates activity diminishes

(figure 60.3), as observed by Dhankhar and colleagues

(1997). Cortical responses are dependent on both the

local neural architecture and those of subcortical structures

that directly or indirectly (via polysynaptic pathways)

project to DLTC. All neural systems have a

refractory period after receiving an input, during which

time further input cannot be processed. The curves in

figure 60.3 originate from the behavior of the local neural

subsystem plus or minus its interaction with other

local (cortical and subcortical) subsystems. The shape

of these, and other, response curves in DLTC could be

subjected to signal modeling; although these models will inevitably be simplifications, analyses of response curves are potentially a powerful way of observing neuromodulation and rapidly or slowly evolving neural plasticity. Thus, instead of subtraction analysis between observations made in two behavioral states to decide whether a local system is “on” or “off,” parametric designs with a varying input could be used to observe changes in the response curves of cortical and subcortical subsystems: for example, when the subject is or is not attending to the stimuli; during the course of learning/habituating to an executive task on the stimuli; and, translating into clinical research, during the course of recovery following a stroke, either occurring naturally or as the result of a particular therapy, in perilesional or remote cortex. The relatively high signal:noise ratio in functional images of the prelexical systems of DLTC makes these good candidates for such research.

FIGURE 60.2 (A) The percentage increase of activity (regional cerebral blood flow) plotted against the rate of hearing words during listening (open squares) and repeating (closed circles) in left periauditory cortex. The baseline activity (closed square) was measured when the subjects were expecting to hear stimuli but received none during the period of data acquisition. During the listening task the words were heard as doublets (each word was repeated after a delay of 500 ms); during repeating the stimulus words were only heard once, but the rate of hearing words was the same as that in the listening task, as the subjects heard their articulated responses. The response curves showed activity to be a little less during repeating than listening, but this did not reach significance. (B) The same plot as (A), but the abscissa is the rate of hearing stimuli; therefore, during repeating the range is half that during listening, as hearing own voice is excluded in this analysis. Activity in response to external input is approximately matched in the two conditions—the slightly greater activity associated with repeating reflects a small contribution from own voice. (C) The same plot as in (A), but for left midDLTC. There was a significantly lower activity (p < .05, corrected for analysis of the whole brain volume) for repeating compared to listening. (D) The same plot as (B): Activity was modulated (suppressed) by articulation, so that net synaptic activity was reduced even in response to the external stimuli. This demonstrates an overall reduction of responsiveness of this local neural system, interpreted as a “focusing” of prelexical processing on the stimuli and not the articulated output.
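One simple form the signal modeling suggested above could take is sketched below in Python (using SciPy's curve_fit): fitting an assumed saturating-exponential form to illustrative activity-rate data resembling the first response type in figure 60.3, so that conditions can be compared through fitted parameters rather than on/off subtractions. The functional form and the data points are assumptions, not values from the studies discussed:

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative activity-rate observations: activity close to
    # maximal at low word rates (cf. figure 60.3, first type).
    rate = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 90.0])     # wpm
    activity = np.array([0.0, 2.6, 2.9, 3.0, 3.1, 3.0])      # % increase

    def saturating(r, a_max, k):
        # Response that rises with input rate but saturates, as a
        # refractory period limits how much further input is processed.
        return a_max * (1.0 - np.exp(-k * r))

    (a_max, k), _ = curve_fit(saturating, rate, activity, p0=(3.0, 0.1))
    print(f"plateau {a_max:.2f}%, half-saturation at {np.log(2) / k:.1f} wpm")

Refitting the same form to data acquired while attention, learning, or recovery varies would express those changes as shifts in the fitted plateau and rate constants rather than as binary activations.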

Seeing words: PET and fMRI studies

It has been known for more than a century that normal

subjects can recognize single words as fast as single letters

(Cattell, 1886). Therefore, the letters of a written

word are perceived in parallel, and are not processed serially.

The neuropsychological literature explains the alexia

accompanying left occipital lesions in terms of

impaired letter form discrimination, parallel letter identification,

whole word form recognition, or visual attentional

processes (for review, see Behrmann, Plaut, and

Nelson, in press). The resulting “pure alexia” is associated

with an increase in reaction time as the number of

letters in a word increases (the word length effect).

Acuity in discerning a word's constituent letters is dependent

on foveal vision, which extends 1° to either side

of fixation. Acuity in parafoveal vision, extending 5° to

either side of fixation, rapidly declines with greater distance

from fixation. Nevertheless, vision from this part

of the retina provides important information about the

overall shape and length of words (Rayner and Bertera,

1979). It has been determined that the spatial extent of a

subject's perceptual span during reading is asymmetric;

it covers only 3-4 characters to the left but 12-15 characters

to the right for a left-to-right reader (McConkie and

Rayner, 1975), with presumably the opposite asymmetry

for readers of Arabic, Farsi, and Hebrew. There is also

a temporal component, with the duration of fixation on

an individual word within a text lasting ~250 ms. A saccade,

ending 7-9 characters to the right, moves the fixation

point to the viewing point of the next word. In this

way, text information is acquired rapidly, without discontinuities

and the need for regressive saccades to fill in

perceptual gaps.
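Taken together, these spatial and temporal parameters imply a rough reading rate; a back-of-envelope calculation in Python (the 5.5 characters-per-word average, including a space, is an assumption):

    fixation_s = 0.25         # ~250-ms fixation per word
    saccade_chars = 8         # saccades end 7-9 characters to the right
    chars_per_word = 5.5      # assumed average word length plus space

    fixations_per_min = 60.0 / fixation_s
    chars_per_min = fixations_per_min * saccade_chars
    print(f"~{chars_per_min / chars_per_word:.0f} words per minute")

This gives roughly 350 words per minute, in line with skilled text reading.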

Reduced right foveal/parafoveal visual information

impairs text reading (“hemianopic” alexia; Zihl, 1995),

and eye movement recordings have shown this to be

due to disorganization of the spatial and temporal components

of the perceptual span: Saccades may be too

short or long, with frequent regressive saccades to fill in

perceptual gaps. Text reading speed correlates with the

number of degrees of sparing of the 5° of right foveal/

parafoveal vision.

It is probable that there is no clear clinical division between

the psychophysicists' “hemianopic” alexia and

the neuropsychologists' “pure” alexia, and it is varying

proportions of impairments in perceptual and temporal

span, letter identification, and attentional processes that

result in the slow reading of any particular patient. As

both conditions accompany left occipital infarcts, the exact

distribution of the lesion must affect one of two prelexical

neural systems involved in perceiving text: one

responsible for letter and whole word identification, and

the other for rightward-directed attention (for European

languages) controlling reading saccades. Functional neuroimaging

studies have shown that the perception of letter

strings and words activates posterior striate cortex (the

receptive field for foveal vision) and prestriate cortex

(Petersen et al., 1990; Rumsey et al., 1997; Price, Wise,

and Frackowiak, 1996). As observed with auditory cortex,

there is no evidence for asymmetry, although only

left occipital lesions result in alexia. However, global alexia,

which includes an inability to recognize single letters,

usually occurs only when a left occipital lesion is

accompanied by disconnection of right striate/prestriate

cortex from the mirror regions on the left (Binder and

Mohr, 1992). The presence of a complete, macular-splitting hemianopia (with no visual information about letter/word form reaching the left striate cortex) does not preclude the ability to read single words; therefore, orthographic information can be processed in the right occipital lobe sufficiently to support reading once the information is transferred to the left hemisphere via the corpus callosum.

FIGURE 60.3 A representation of the two types of activity-rate responses (open squares and closed circles) in DLTC so far reported in the literature.

The neural system for letter and word identification is

shown in figures 60.1B and 60.4A. It has been shown

that activity in these regions increases with increasing

rate or duration of seeing single words (Price, Moore,

and Frackowiak, 1996). Petersen and co-workers (1990)

demonstrated that most of this cortex responded similarly

to words, letter strings, and letter-like symbols,

with the exception of a region in left prestriate cortex

which responded only to words and pronounceable

nonwords; they concluded that this was the location of

the visual whole word form system. Subsequently, in a

study of different design, Howard and colleagues (1992)

located the word form system in the left posterior temporal

lobe. This sharp phrenological distinction, on the

basis of contrasts on observations made during two

behavioral conditions, may be misleading, and it is

unlikely that the more central “black boxes” of information

processing models of language are represented as

anatomically discrete cortical regions; realization of visual

word form is perhaps better viewed as a distributed

system between left prestriate and left posterior temporal

cortex.

The neural system involved in word identification by

controlling attention and eye movements across text is

shown in figure 60.4B,C. This involves left striate cortex

(V1, in the depth of the calcarine sulcus) receiving information

from right parafoveal visual space (for left-toright

readers) (figure 60.4B), and posterior parietal cortex

(PPC) and the frontal eye fields (FEF, right >> left)

(figure 60.4C). Activity in these regions is not affected by

the rate of presentation of single words, but is apparent

when contrasting reading across horizontal arrays of

words with reading single words presented at the same

rate. PPC and FEF activations are apparent in other

studies that have investigated directed visual attention

(Corbetta, 1998). The parafoveal V1 activation is consistent

with visual attention's being directed to the right of

fixation when reading text left-to-right, and is a demonstration

that visual attentional processes, at least during

text reading, may modulate activity at the level of primary

visual cortex. This is in contrast to the combined

PET and event-related potential studies which have suggested

that attention directed toward objects in visual

space normally acts at the level of prestriate cortex (for

example, see Heinze et al., 1994).

FIGURE 60.4 Activations coregistered onto axial MRI slices

from the SPM96 template (the coordinate marks are in red).

(A) Viewing single words, as in figure 60.1B. (B) Reading

across horizontal word arrays (3 and 5 words) contrasted with

viewing single words at the same rate, coregistered onto an

MRI axial image 16 mm dorsal to the image in (A). This subtraction

reveals the activation in the representation of right

parafoveal space in the left striate cortex, demonstrating the

way that visual attention during text reading modulates activity

in V1. (C) The same behavioral contrast as in (B), but 55 mm

dorsal to the plane depicted in (B). This shows bilateral PPC

and right FEF activations associated with the planning and

generation of forward saccades during reading across horizontal

arrays of words.


Conclusions

Much of the functional neuroimaging of language has

been involved with responses in multimodal association

cortex, which has on occasion been bedeviled by inconsistency

of results across studies and debates about

whether activations directly reflect language processing

itself or represent a parallel process, such as working

memory, involved in the performance of the tasks being

used. Prelexical processes return stronger signals, and

manipulating any one of a number of the physical properties

of seen or heard verbal input, with or without the

explicit attention of the subject, is an approach that has

yet to be fully exploited. One of the more interesting applications

will be to show changes in physiological responses

over a series of scans in the same individual,

particularly in relation to clinical questions directed at

the processes underlying stroke recovery and post-surgical

adaptations to a cochlear implant (Okazawa et al.,

1996).

NOTE

1. Note that in the imaging literature the term “phonological”

is used to cover a far broader range of processes than simply

the phonological component of spoken word recognition.

In general, imaging studies of phonological processing

have examined a range of tasks involving the manipulation

and storage of phonological representations. Very little of

this work can claim to have identified specifically linguistic

areas of phonological processing (see Poeppel, 1996, for a

critique of this work). The choice of tasks in this work often

makes it difficult to relate imaging studies to standard distinctions

in either the cognitive or linguistic literature.

REFERENCES

ANDREWS, S., 1989. Frequency and neighborhood effects on

lexical access: Activation or search? J. Exp. Psychol.: Learn.

Mem. Cogn. 15:802-814.

ARCHANGELI, D., 1984. Underspecification in Yawelmani Phonology.

Doctoral dissertation, MIT, Cambridge, Mass.

AULANKO, R., R. HARI, O. V. LOUNASMAA, R. NÄÄTÄNEN,

and M. SAMS, 1993. Phonetic invariance in the human auditory

cortex. Neuroreport 4:1356-1358.

BEHRMANN, M., D. C. PLAUT, and J. NELSON, in press. A

meta-analysis and new data supporting an interactive account

of postlexical effects in letter-by-letter reading. Cogn.

Psychol.

BINDER, J. R., J. A. FROST, T. A. HAMMEKE, S. M. RAO,

and R. W. COX, 1996. Function of the left planum temporale

in auditory and linguistic processing. Brain 119:1239-

1247.

BINDER, J. R., and J. P. MOHR, 1992. The topography of callosal

reading pathways. A case-control analysis. Brain

115:1807-1826.

BINDER, J. R., S. M. RAO, T. A. HAMMEKE, F. Z. YETKIN, A.

JESMANOWICZ, P. BANDETTINI, E. WONG, L. ESTKOWSKI, M.

GOLDSTEIN, V. HAUGHTON, and J. HYDE, 1994. Functional

magnetic resonance imaging of human auditory cortex. Ann.

Neurol. 35:662-672.

BOATMAN, D., C. HALL, M. H. GOLDSTEIN, R. LESSER, and B.

GORDON, 1997. Neuroperceptual differences in consonant

and vowel discrimination: As revealed by direct cortical

electrical interference. Cortex 33:83-98.

BREGMAN, A. S., 1990. Auditory Scene Analysis: The Perceptual

Organization of Sound. Cambridge, Mass.: MIT Press.

BUCHMAN, A. S., D. C. GARRON, J. E. TROST-CARDAMONE,

M. D. WICHTER, and M. SCHWARTZ, 1986. Word deafness:

One hundred years later. J. Neurol. Neurosurg. Psychiat.

49:489-499.

CATTELL, J. M., 1886. The inertia of the eye and brain. Brain

8:295-312.

CORBETTA, M., 1998. Frontoparietal cortical networks for directing

attention and the eyes to visual locations: Identical,

independent, or overlapping neural systems? Proc. Natl.

Acad. Sci. U.S.A. 95:831-838.

CREUTZFELDT, O., G. OJEMANN, and E. LETTICH, 1989. Neuronal

activity in the human lateral temporal lobe. II. Responses

to the subject's own voice. Exp. Brain Res. 77:476-

489.

DEMONET, J. F., F. CHOLLET, S. RAMSAY, D. CARDEBAT, J. L.

NESPOULOUS, R. WISE, A. RASCOL, and R. FRACKOWIAK,

1992. The anatomy of phonological and semantic processing

in normal subjects. Brain 115:1753-1768.

DEMONET, J. F., C. PRICE, R. WISE, and R. FRACKOWIAK,

1994. A PET study of cognitive strategies in normal subjects

during language tasks. Influence of phonetic ambiguity and

sequence processing on phoneme monitoring. Brain 117:

671-682.

DHANKHAR, A., B. E. WEXLER, R. K. FULBRIGHT, T. HALWES,

A. M. BLAMIRE, and R. G. SHULMAN, 1997. Functional magnetic

resonance imaging assessment of the human brain auditory

cortex response to increasing word presentation rates.

J. Neurophysiol. 77:476-483.

DIESCH, E., C. EULITZ, S. HAMPSON, and R. ROSS, 1996. The

neurotopography of vowels as mirrored by evoked magnetic

field measurements. Brain Lang. 53:143-168.

DIJKSTRA, T., A. ROELOFS, and S. FIEUWS, 1995. Orthographic

effects on phoneme monitoring. Can. J. Exp. Psychol. 49:264-

271.

DONNENWERTH-NOLAN, S., M. K. TANENHAUS, and M. S.

SEIDENBERG, 1981. Multiple code activation in word recognition:

Evidence from rhyme monitoring. J. Exp. Psychol.:

Hum. Learn. Mem. 7:170-180.

EGGERMONT, J. J., 1995. Representation of a voice onset time

continuum in primary auditory cortex of the cat. J. Acoust.

Soc. Am. 98:911-920.

FIEZ, J. A., M. E. RAICHLE, D. A. BALOTA, P. TALLAL, and

S. E. PETERSEN, 1996. PET activation of posterior temporal

regions during auditory word presentation and verb

generation. Cereb. Cortex 6:1-10.

FIEZ, J. A., M. E. RAICHLE, F. M. MIEZIN, S. E. PETERSEN, P.

TALLAL, and W. F. KATZ, 1995. PET studies of auditory and

phonological processing: Effects of stimulus characteristics

and task demands. J. Cogn. Neurosci. 7:357-375.

FITCH, R. H., S. MILLER, and P. TALLAL, 1997. Neurobiology

of speech perception. Annu. Rev. Neurosci. 20:331-353.

FORSTER, K. I., and M. TAFT, 1994. Bodies, antibodies, and

neighborhood density effects in masked form priming. J.

Exp. Psychol.: Learn. Mem. Cognit. 20:844-863.


FRISTON, K. J., J. ASHBURNER, C. D. FRITH, J.-B. POLINE, J. D.

HEATHER, and R. S. J. FRACKOWIAK, 1995a. Spatial registration

and normalization of images. Hum. Brain Mapp. 2:165-

188.

FRISTON, K. J., A. P. HOLMES, K. J. WORSLEY, J.-B. POLINE,

C. D. FRITH, and R. S. J. FRACKOWIAK, 1995b. Statistical

parametric maps in functional imaging: A general linear

approach. Hum. Brain Mapp. 2:189-210.

GRAINGER, J., and A. M. JACOBS, 1996. Orthographic processing

in visual word recognition: A multiple read-out model.

Psychol. Rev. 103:674-691.

HEINZE, H. J., G. R. MANGUN, W. BURCHERT, H. HINRICHS,

M. SCHOLZ, T. F. MÜNTE, A. GOS, M. SCHERG, S.

JOHANNES, H. HUNDESHAGEN, M. S. GAZZANIGA, and S. A.

HILLYARD, 1994. Combined spatial and temporal imaging

of brain activity during visual selective attention in humans.

Nature 372:543-546.

HIRANO, S., Y. NAITO, H. OKAZAWA, H. KOJIMA, I. HONJO, K. ISHIZU, Y. YONEKURA, Y. NAGAHAMA, H. FUKUYAMA, and J. KONISHI, 1997. Cortical activation by monaural speech sound stimulation demonstrated by positron emission tomography. Exp. Brain Res. 113:75-80.

HOWARD, D., K. PATTERSON, R. WISE, W. D. BROWN, K. FRISTON, C. WEILLER, and R. FRACKOWIAK, 1992. The cortical localization of the lexicons. Brain 115:1769-1782.

KIPARSKY, P., 1985. Some consequences of lexical phonology. Phonology Yearbook 2:85-138.

LAHIRI, A., and W. MARSLEN-WILSON, 1991. The mental representation of lexical form: A phonological approach to the recognition lexicon. Cognition 38:245-294.

LESCH, M. F., and A. POLLATSEK, 1998. Evidence for the use of assembled phonology in accessing the meaning of printed words. J. Exp. Psychol.: Learn. Mem. Cognit. 24:573-592.

LEVELT, W. J. M., 1989. Speaking: From Intention to Articulation. Cambridge, Mass.: MIT Press.

LUND, E., P. E. SPLIID, E. ANDERSEN, and M. BOJSEN-MOLLER, 1986. Vowel perception: A neuroradiological localization of the perception of vowels in the human cortex. Brain Lang. 29:191-211.

MCCLELLAND, J. L., and J. L. ELMAN, 1986. The TRACE model of speech perception. Cogn. Psychol. 18:1-86.

MCCLELLAND, J. L., and D. E. RUMELHART, 1981. An interactive activation model of context effects in letter perception: 1. An account of the basic findings. Psychol. Rev. 88:375-407.

MCCONKIE, G., and K. RAYNER, 1975. The span of the effective stimulus during a fixation in reading. Percept. Psychophys. 17:578-586.

MCGUIRE, P. K., D. A. SILBERSWEIG, and C. D. FRITH, 1996. Functional anatomy of verbal self-monitoring. Brain 119:907-917.

MCQUEEN, J. M., D. NORRIS, and A. CUTLER, 1994. Competition in spoken word recognition: Spotting words in other words. J. Exp. Psychol.: Learn. Mem. Cognit. 20:621-638.

MARSLEN-WILSON, W., and P. WARREN, 1994. Levels of perceptual representation and process in lexical access: Words, phonemes, and features. Psychol. Rev. 101:653-675.

MARSLEN-WILSON, W. D., and A. WELSH, 1978. Processing interactions and lexical access during word recognition in continuous speech. Cogn. Psychol. 10:29-63.

MAZZIOTTA, J. C., S.-C. HUANG, M. E. PHELPS, R. E. CARSON, N. S. MACDONALD, and K. MAHONEY, 1985. A noninvasive positron computed tomography technique using oxygen-15 labeled water for the evaluation of neurobehavioral task batteries. J. Cereb. Blood Flow Metab. 5:70-78.

MEHLER, J., 1981. The role of syllables in speech processing: Infant and adult data. Phil. Trans. R. Soc. Lond. B 295:333-352.

MILLER, G. A., 1951. Language and Communication. New York: McGraw-Hill.

MÜLLER-PREUSS, P., and D. PLOOG, 1981. Inhibition of auditory cortical neurons during phonation. Brain Res. 215:61-76.

MUMMERY, C. J., S. K. SCOTT, J. ASHBURNER, and R. J. S. WISE, in press. Functional neuroimaging of speech perception in six normal and two aphasic subjects. J. Acoust. Soc. Am.

NÄÄTÄNEN, R., 1990. The role of attention in auditory information processing as revealed by event-related potentials and other brain measures of cognitive function. Behav. Brain Sci. 13:201-288.

NÄÄTÄNEN, R., A. LEHTOKOSKI, M. LENNES, M. CHEOUR, M. HUOTILAINEN, A. IIVONEN, M. VAINIO, P. ALKU, R. J. ILMONIEMI, A. LUUK, J. ALLIK, J. SINKKONEN, and K. ALHO, 1997. Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature 385:432-434.

NORRIS, D. G., 1994. Shortlist: A connectionist model of continuous speech recognition. Cognition 52:189-234.

NORRIS, D., J. M. MCQUEEN, and A. CUTLER, in press. Merging information in speech recognition: Feedback is never necessary. Behav. Brain Sci.

NORRIS, D., J. M. MCQUEEN, A. CUTLER, and S. BUTTERFIELD, 1997. The possible-word constraint in the segmentation of continuous speech. Cogn. Psychol. 34:191-243.

OHL, F. W., and H. SCHEICH, 1997. Orderly cortical representation of vowels based on formant interaction. Proc. Natl. Acad. Sci. U.S.A. 94:9440-9444.

OKAZAWA, H., Y. NAITO, Y. YONEKURA, N. SADATO, S. HIRANO, S. NISHIZAWA, Y. MAGATA, K. ISHIZU, N. TAMAKI, I. HONJO, and J. KONISHI, 1996. Cochlear implant efficiency in pre- and postlingually deaf subjects. A study with H2 15O and PET. Brain 119:1297-1306.

PANTEV, C., M. HOKE, K. LEHNERTZ, B. LÜTKENHÖNER, G. FAHRENDORF, and U. STÖBER, 1990. Identification of sources of brain neuronal activity with high spatiotemporal resolution through combination of neuromagnetic source localization (NMSL) and magnetic resonance imaging (MRI). Electroencephalogr. Clin. Neurophysiol. 75:173-184.

PATTERSON, R. D., M. H. ALLERHAND, and C. GIGUÈRE, 1995. Time-domain modelling of peripheral auditory processing: A modular architecture and a software platform. J. Acoust. Soc. Am. 98:1890-1894.

PAUS, T., D. W. PERRY, R. J. ZATORRE, K. J. WORSLEY, and A. C. EVANS, 1996. Modulation of cerebral blood flow in the human auditory cortex during speech: Role of motor-to-sensory discharges. Eur. J. Neurosci. 8:2236-2246.

PEREA, M., and M. CARREIRAS, 1998. Effects of syllable frequency and syllable neighborhood frequency in visual word recognition. J. Exp. Psychol.: Hum. Percept. Perform. 24:134-144.

PETERSEN, S. E., P. T. FOX, M. I. POSNER, M. MINTUN, and M. E. RAICHLE, 1988. Positron emission tomographic studies of the cortical anatomy of single-word processing. Nature 331:585-589.

PETERSEN, S. E., P. T. FOX, M. I. POSNER, M. MINTUN, and M. E. RAICHLE, 1989. Positron emission tomography studies of the processing of single words. J. Cogn. Neurosci. 1:153-170.

PETERSEN, S. E., P. T. FOX, A. Z. SNYDER, and M. E. RAICHLE, 1990. Activation of extrastriate and frontal cortical areas by words and word-like stimuli. Science 249:1041-1044.

PETERSON, G. E., and H. L. BARNEY, 1952. Control methods used in the study of vowels. J. Acoust. Soc. Am. 24:175-184.

PLOMP, R., and A. M. MIMPEN, 1979. Speech-reception threshold for sentences as a function of age and noise. J. Acoust. Soc. Am. 66:1333-1342.

POEPPEL, D., 1996. A critical review of PET studies of phonological processing. Brain Lang. 55:317-351.

POEPPEL, D., E. YELLIN, C. PHILLIPS, T. P. L. ROBERTS, H. A. ROWLEY, K. WEXLER, and A. MARANTZ, 1996. Task-induced asymmetry of the auditory evoked M100 neuromagnetic field elicited by speech sounds. Cogn. Brain Res. 4:231-242.

POEPPEL, D., C. PHILLIPS, E. YELLIN, H. A. ROWLEY, T. P. ROBERTS, and A. MARANTZ, 1997. Processing of vowels in supratemporal auditory cortex. Neurosci. Lett. 221:145-148.

POLSTER, M. R., and S. B. ROSE, 1998. Disorders of auditory processing: Evidence for modularity in audition. Cortex 34:47-65.

PRAAMSTRA, P., P. HAGOORT, B. MAASSEN, and T. CRUL, 1991. Word deafness and auditory cortical function. A case history and hypothesis. Brain 114:1197-1225.

PRICE, C. J., C. J. MOORE, and R. S. J. FRACKOWIAK, 1996. The effect of varying stimulus rate and duration on brain activity during reading. Neuroimage 3:40-52.

PRICE, C. J., R. J. S. WISE, and R. S. J. FRACKOWIAK, 1996. Demonstrating the implicit processing of visually presented words and pseudowords. Cereb. Cortex 6:62-70.

PRICE, C., R. WISE, S. RAMSAY, K. FRISTON, D. HOWARD, K. PATTERSON, and R. FRACKOWIAK, 1992. Regional response differences within the human auditory cortex when listening to words. Neurosci. Lett. 146:179-182.

PRICE, C., R. J. S. WISE, E. A. WARBURTON, C. J. MOORE, D. HOWARD, K. PATTERSON, R. S. J. FRACKOWIAK, and K. J. FRISTON, 1996. Hearing and saying. The functional neuroanatomy of auditory word processing. Brain 119:919-931.

PULLEYBLANK, D., 1983. Tone in Lexical Phonology. Doctoral dissertation, Cambridge, Mass.: MIT.

RAPP, B. C., 1992. The nature of sublexical orthographic organization: The bigram-trough hypothesis examined. J. Mem. Lang. 31:33-53.

RAYNER, K., and J. H. BERTERA, 1979. Reading without a fovea. Science 206:468-469.

READ, C. A., Y. ZHANG, H. NIE, and B. DING, 1986. The ability to manipulate speech sounds depends on knowing alphabetic writing. Cognition 24:31-44.

RUMSEY, J. M., B. HORWITZ, B. C. DONOHUE, K. NACE, J. M. MAISOG, and P. ANDREASON, 1997. Phonological and orthographic components of word recognition. A PET-rCBF study. Brain 120:739-759.

SEIDENBERG, M. S., and M. K. TANENHAUS, 1979. Orthographic effects on rhyming. J. Exp. Psychol.: Hum. Learn. Mem. 5:546-554.

SINEX, D. G., L. P. MCDONALD, and J. B. MOTT, 1991. Neural correlates of nonmonotonic temporal acuity for voice onset time. J. Acoust. Soc. Am. 90:2441-2449.

STEINSCHNEIDER, M., C. E. SCHROEDER, J. C. AREZZO, and H. G. VAUGHAN JR., 1995. Physiologic correlates of the voice onset time boundary in primary auditory cortex (A1) of the awake monkey: Temporal response patterns. Brain Lang. 48:326-340.

STUDDERT-KENNEDY, M., and D. SHANKWEILER, 1970. Hemispheric specialization for speech perception. J. Acoust. Soc. Am. 48:579-594.

THULBORN, K. R., 1998. A BOLD move for fMRI. Nature Med. 4:155-156.

VAN ORDEN, G. C., 1987. A ROWS is a ROSE: Spelling, sound, and reading. Mem. Cognit. 15:181-198.

WISE, R., F. CHOLLET, U. HADAR, K. FRISTON, E. HOFFNER, and R. FRACKOWIAK, 1991. Distribution of cortical neural networks involved in word comprehension and word retrieval. Brain 114:1803-1817.

WOLDORFF, M. G., C. C. GALLEN, S. A. HAMPSON, S. A. HILLYARD, C. PANTEV, D. SOBEL, and F. E. BLOOM, 1993. Modulation of early sensory processing in human auditory cortex during auditory selective attention. Proc. Natl. Acad. Sci. U.S.A. 90:8722-8726.

WOLDORFF, M. G., S. A. HACKLEY, and S. A. HILLYARD, 1991. The effects of channel-selective attention on the mismatch negativity wave elicited by deviant tones. Psychophysiology 28:30-42.

WOLDORFF, M. G., and S. A. HILLYARD, 1991. Modulation of early auditory processing during selective listening to rapidly presented tones. Electroencephalogr. Clin. Neurophysiol. 79:170-191.

ZATORRE, R. J., A. C. EVANS, E. MEYER, and A. GJEDDE, 1992. Lateralization of phonetic and pitch discrimination in speech processing. Science 256:846-849.

ZATORRE, R. J., E. MEYER, A. GJEDDE, and A. C. EVANS, 1996. PET studies of phonetic processing of speech: Review, replication, and reanalysis. Cereb. Cortex 6:21-30.

ZIHL, J., 1995. Eye movement patterns in hemianopic dyslexia. Brain 118:891-912.


