
The neuroanatomic and neurophysiological infrastructure for speech and language

David Poeppel 1,2

New tools and new ideas have changed how we think about the neurobiological foundations of speech and language processing. This perspective focuses on two areas of progress. First, focusing on spatial organization in the human brain, the revised functional anatomy for speech and language is discussed. The complexity of the network organization undermines the well-regarded classical model and suggests looking for more granular computational primitives, motivated both by linguistic theory and neural circuitry. Second, focusing on recent work on temporal organization, a potential role of cortical oscillations for speech processing is outlined. Such an implementational-level mechanism suggests one way to deal with the computational challenge of segmenting natural speech.

Addresses
1 Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, United States
2 Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt, Germany

Corresponding author: Poeppel, David (david.poeppel@nyu.edu)

Current Opinion in Neurobiology 2014, 28:142–149

This review comes from a themed issue on Communication and language

Edited by Michael Brainard and Tecumseh Fitch

http://dx.doi.org/10.1016/j.conb.2014.07.005

0959-4388/© 2014 Published by Elsevier Ltd. All rights reserved.

Introduction

Experimental research on the neurobiological foundations of speech and language processing has taken considerable strides in the last decade, due in part to advances in the methods available to study the human brain (improved resolution of recording techniques) and in part to more theoretically motivated research that builds on crucial distinctions provided by the results of linguistics, cognitive psychology and computer science (improved 'conceptual resolution'). As the neurobiology of language matures, the units of analysis continue to change and become increasingly refined: from (i) broad (and somewhat pre-theoretical) categories such as 'production' versus 'perception/comprehension' to (ii) subroutines of language processing such as phonology, lexical processing, syntax, semantics, and so on, to (iii) ever more fine-grained representations and computational primitives argued to underpin the different subroutines of language, such as concatenation, linearization, among others.

In all areas of language processing, noteworthy new perspectives have been developed (reviewed, among many others, for example, in [1–3], with special emphasis on speech, linguistic structure-building, and the sensorimotor basis of speech/language, respectively). Notwithstanding the novel approaches, many of the substantive challenges are only now becoming clear. The number and arrangement of the cortical and subcortical regions underpinning speech and language processing demonstrate that the system is considerably more complex and distributed; the age of Broca's and Wernicke's areas and the era of left-hemisphere imperialism are over. Here I focus on two issues that are redefining the research agenda, pointing towards a computational neurobiology of language [4], a research direction that emphasizes the representational and computational primitives that form the basis of speech and language.

There are, of course, many ways to illustrate the progress that has been made, highlighting new ideas and directions. One approach would be to review the different aspects or levels of language processing that have been examined in new neuroscientific experimentation, that is, phonetics, phonology [5,6], lexical access [7–10], lexical semantics [11], syntax [12,13], compositional semantics [14,15], discourse representation [16,17]; moreover, the interaction of the linguistic computational system with other domains has been investigated in interesting ways, including how language processing interfaces with attention [18], memory [19], emotion [20], cognitive control [21], predictive coding [22–24], and even aesthetics [25].

A different approach is taken here, focusing first on the revised spatial map of brain and language; then, narrowing to one functional problem, a new 'temporal view' is discussed to illustrate a linking hypothesis between the computational requirements of speech perception and the neurobiological infrastructure that may provide a neural substrate.

The new functional anatomy: maps of regions, streams, and hemispheres

Our understanding of the anatomic foundations of language processing has changed dramatically in the last decade.

One might call this the maps problem [26], that is, the challenge to define the best possible spatial map that describes the anatomic substrate [27–29]. The older, 'classical' view and its limitations are discussed further in Hagoort, this volume, where a contrasting dynamic network view of local function is described.

(a) Starting at the most coarse level, consider the role of hemispheric asymmetry. Historically, the lateralization of language processing to the 'dominant hemisphere' has been one of the principal defining features. It was uncontroversial that language processing is strongly lateralized. However, a more nuanced and theoretically informed view of language processing, breaking down the processes into constituent operations, has revealed that lateralization patterns are complex and subtle—and that not all language processing components are lateralized. For example, when examining the cortical regions mediating speech perception and lexical-level comprehension, lesion [30,31], imaging [32–34], and electrophysiological data [35,36] demonstrate convincingly that both left and right superior temporal cortical regions are implicated. Indeed, the operations mapping from input signals (e.g. sound) to lexical-level meaning, argued to be part of ventral stream processing (see b, below), appear to be robustly bilateral, as illustrated in Figure 1a (bottom panel).

By contrast, it is typically argued that the structures and operations underlying production, for example, are lateralized. As illustrated in Figure 1a, one of the dorsal stream projections, suggested to underpin the sensory-motor mapping necessary for perception-production alignment, is depicted as fully lateralized. However, new data acquired in pre-surgical epilepsy patients using electrocorticography (ECog) seriously challenge even this generalization [37]. It is shown, based on a range of tasks requiring sensory (sound)-to-motor (articulatory) transformation, that the dorsal stream structures that provide the basis for this mapping are clearly bilateral as well (Figure 1b). Other, non-speech dorsal stream functions, for example operations that are part of grammatical relations, may be supported by other dorsal stream projections, and their lateralization pattern has not been fully established, although there appears to be a fair degree of lateralization to the dominant hemisphere (see [2] and Hagoort, this volume).

Various other imaging and physiology experiments on other aspects of language [38,23] also invite the interpretation that lateralization patterns are more complex than anticipated. Cumulatively, in other words, the language-ready brain (to use a phrase of Hagoort, this volume) appears to execute many of its subroutines bilaterally, unlike the 150-year-old dogma. And yet... There remain compelling reasons why an explanation for lateralization of function is required. Lesions to the left versus the right peri-Sylvian regions do not lead to the same linguistic deficits, so a new approach to this classical issue is necessary.

Figure 1. (a) Dual stream model [1]. Note the bilateral arrangement of the ventral stream network (mapping from sound to meaning) and the lateralized dorsal stream projections (mapping to articulation). (b) Bilateral processing of sensory-motor transformations for speech, from [37]. (c) Spectrograms for three tasks. A 'Listen-Speak' task: subjects heard a word and after a short delay had to repeat it; a 'Listen-Mime' task: subjects heard a word and after the same short delay had to move their articulators without vocalizing; a 'Listen' task: subjects listened passively to a word. Sensory-motor (S-M) responses are seen in example electrodes in both the left (top row) and right (bottom row) hemispheres, as demonstrated by a high gamma neural response (70–90+ Hz) present both when the subject listened to a word and when they repeated/mimed it. (d) Population average brains with active electrodes demonstrate that S-M processes occur bilaterally (red). Electrodes that responded to the passive listen condition are also noted with green outlines (red with green outlines).

Which operations are functionally lateralized (and why) remains controversial. It has been debated for the speech perception case, for instance [39–41], but the nature of the questions is at least fine-grained, that is to say at the level of circuits that execute specific (possibly network-dependent) computations [42]. One hypothesis that merits exploration is as follows: the operations that comprise the processing of input and output systems (the interfaces) are carried out bilaterally; the linguistic operations per se, beyond the initial mappings, and specifically those that require combinatorics and composition (COM) as well as linguistically (structurally-) based prediction (PRE), are lateralized. A COM-PRE view predicts that the neural circuits mediating those types of operations on linguistic data structures are asymmetrically distributed; but how such circuits might look remains pure speculation.

(b) The organization of language regions within a hemisphere has also seen a major shift in perspective. The classical model—still the prevailing view in most textbooks—identifies a few crucial areas (Broca's and Wernicke's regions, often the angular gyrus, connected by the arcuate fasciculus) and attributes entire chunks of linguistic cognition to these large brain regions. (See Hagoort, this volume, for critique.) There now exists consensus that a more likely organization involves processing streams organized along dorsal and ventral routes. This is argued to be the case for speech [1,43,44], lexical-level processing [45], syntactic analysis [2], and semantics [46].

The existence of concurrent 'what,' 'where,' and 'how' streams in vision highlights how large-scale computational challenges (localization, identification, sensorimotor transformation/action–perception linking) can be implemented in parallel by streams consisting of hierarchically organized cortical regions. This key idea from visual neuroscience was adapted for speech and language in the past 10 years [1,2,43,44]. Figure 1a illustrates one such model, constructed to account for a range of phenomena in speech. The bilateral ventral streams are considered responsible for supporting the transformation from auditory input to lexical representations. The dorsal stream (one of at least two parallel dorsal streams, see [2]) is primarily crucial for sensorimotor transformations.

(c) Crucial advances on local anatomy. Until about 2000, the interpretation of experimental data implicating Broca's area was made at a level of analysis referring to left inferior frontal gyrus and, at best, to Brodmann areas 44 versus 45. There exist some interesting connectivity studies, as well [47], but by and large the functional anatomic characterization has been coarse. (Hagoort, this volume, provides a more extensive discussion of the functional role of Broca's area.)

New anatomic techniques (imaging, immunocytochemistry) have now been applied. Data in an influential paper [48] show that the organization of this one cortical region is much more complex, incorporating (depending how one counts) 5–10 local fields that differ in their cellular properties. Figure 2a [48] illustrates the currently hypothesized organization of Broca's region, highlighting anterior and posterior as well as dorsal and ventral subdivisions of known fields and pointing to additional, separate opercular and sulcal regions that are anatomically identifiable.

It is highly likely that each sub-region performs at least one different computation—after all, they are anatomically distinct. Suppose then that there are merely five anatomic subdomains in Broca's region, and suppose that each anatomic subdomain supports, say, two computations. These are conservative numbers, but we must then still identify 10 computations (underestimating the actual complexity). Some of the computations will apply to any input data structure (structured linguistic representation, speech signal, action plan, rhythmic sequence, among others), since it is established that Broca's region is engaged by numerous cognitive tasks [49]. Other computations may be dedicated to data structures native to the linguistic cognitive system. We know little about precisely what kind of conditions need to be met to trigger selectively the many areas that constitute this part of the inferior frontal cortex.

(d) A final point about anatomy: the vast majority of research on brain and language has focused on the traditional peri-Sylvian language areas; but both cortical and subcortical regions that have not been investigated as much before play a crucial role. (This point is amplified, as well, in Hagoort, this volume.) Two regions, in particular, deserve emphasis with respect to linguistic computation: the left anterior temporal lobe (ATL), and the posterior middle temporal gyrus (pMTG). The left ATL has been implicated in a variety of experiments on elementary structure assembly, that is, the primitive combinatorics (COM) that underlie syntactic and semantic structure building. fMRI data show quite consistently that combination into phrasal units, perhaps in the service of subsequent semantic interpretation, is mediated there [2]. Recent MEG data [14,15] provide evidence about the dynamics; the experiments demonstrate that between 200 and 300 ms after the onset of a crucial word that can be combined with a previous item, this region reflects the execution of basic combinatoric operations.

The pMTG appears to be a crucial lexical node in the larger network of language areas—and heavily connected to lATL. This region is, in general, very richly connected to other language areas (see Figure 2b, [50]) and is driven by many lexical tasks in imaging studies, including lexical access, lexical ambiguity resolution, and other processes. Lesions to the MTG lead to severe word comprehension difficulties [51] and profound aphasia, if paired with STG lesions. Recent MEG data on the dynamics in pMTG during lexical access show that, after initial activation of lexical targets, unambiguously lexical attributes such as surface frequency and neighborhood density elicit MTG modulation [52] no later than 350 ms after word onset.

In sum, the functional anatomy of speech and language looks quite different now than the classical model has taught us. Locally highly structured areas are connected into networks that are themselves organized into processing streams that support broader computational goals (e.g. sound to meaning mapping, sound to articulation mapping) and that are, in turn, bilaterally supported for many linguistic functions. The research program for the next years will ask about which basic computations are executed in the most local circuits and which computations then group together to generate the linguistic functions that constitute language, say, syntax, phonology, lexical access, or semantics. Structure building—be it for sound or meaning [14,53,54]—requires the assembly of elementary representations by elementary computations, and it is ultimately these basic operations we are seeking to identify in a new neurobiology of language.

Figure 2. (a) Broca's region, from [48]. Note the numerous subdivisions in the region. (b) Major pathways associated with left MTG, from [50]. Tractography data from two subjects are shown. Each row depicts the individual subject's ROI, represented in their native space (left, yellow). Sagittal, axial, and coronal slices of the fiber bundles involved are shown. As per DTI mapping, left MTG is massively connected to crucial language regions in a hub-like manner.

The temporal view: a linking hypothesis for the neurobiology of speech perception

For nearly a century, speech perception has been studied primarily from the perspective of the acoustic signal. Spectral properties were thought to provide the principal cues for decoding phonemes, and from these building blocks words and phrases would be derived. In cognitive neuroscience studies, the basic intuitions of such a model are investigated regularly, testing which acoustic-phonetic primitives are essential and invariant, acoustic attributes [6], distinctive features [5,6,55,56], or phonemes [57]. Despite the successes of this research program, many issues have remained unanswered. One major issue concerns how the brain extracts the relevant units for analysis to begin with.

Speech and other dynamically changing auditory signals (as well as visual stimuli, including sign) contain crucial information required for successful decoding that is carried at multiple temporal scales (e.g. intonation-level information at the scale of 500–2000 ms, syllabic information that is closely correlated to the acoustic envelope of speech, 150–300 ms, and rapidly changing featural information, 20–80 ms). These different aspects of signals (slow and fast temporal modulation, frequency composition) must be analyzed for successful recognition. Recent research has aimed to identify the basis for the required multi-time resolution analysis [58,72]. A series of recent experiments suggests that intrinsic neuronal oscillations in cortex at 'privileged' frequencies (delta 1–3 Hz, theta 4–8 Hz, low gamma 30–50 Hz) may provide some of the relevant mechanisms to parse continuous speech into the necessary chunks for decoding [59,60–63]. To achieve parsing of a naturalistic input signal (e.g. speech signal) into elementary pieces, one 'mesoscopic-level' mechanism is suggested to be the sliding and resetting of temporal windows, implemented as phase locking of low-frequency (delta, theta) activity to the envelope of speech and phase resetting of the intrinsic oscillations on privileged timescales [58,64]. The successful resetting of neuronal activity, triggered in part by stimulus-driven spikes, provides time constants (or temporal integration windows) for parsing and decoding speech signals. Recent studies link the infrastructure provided by neural oscillations (which reflect neuronal excitability cycles) to basic perceptual challenges in speech recognition, such as breaking the continuous input stream into chunks suitable for subsequent analysis [59,63,65].

Figure 3. (a) Entrainment of cortical activity to speech envelope, from [65]. (b) Spectrograms of stimulus and neural activity illustrate correspondence in the frequency domain. (c) Contour maps of two conditions show typical auditory distribution. (d) Attention-dependent processing of a complex speech signal with two simultaneous speakers [69]. Stimulus is the same mixture of two voices. Upper trace: attend to same speaker/stream on different trials. Lower trace: attend to different speaker/stream on different trials.

Two recent studies serve to illustrate the logic of the research program. Figure 3a [65] shows the data from a magnetoencephalography (MEG) recording in which subjects listened to continuous regular or manipulated speech. The data show the close alignment between the stimulus envelope and the response from auditory cortex, as has also been shown in other recent studies [59,62,63]. How this alignment between speech acoustics and neural oscillations might underpin intelligibility has been unclear. This study tested the hypothesis that the 'sharpness' of temporal fluctuations in the crucial band envelope was a temporal cue to syllabic rate, driving the intrinsic delta or theta rhythms to track the stimulus and thereby facilitating intelligibility. It was observed that 'sharp' events in the stimulus (i.e. auditory edges) cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks for further decoding. Using MEG recordings it was shown that by removing temporal fluctuations that occur at the syllabic rate, envelope-tracking activity is compromised. By artificially reinstating such temporal fluctuations, envelope-tracking activity is regained. Crucially, changes in tracking correlate with stimulus intelligibility. These results suggest that the sharpness of stimulus edges, as reflected in the cochlear output, drives oscillatory activity to track and entrain to the stimulus at its syllabic rate. This process facilitates parsing of the stimulus into meaningful chunks appropriate for subsequent decoding.
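As a rough illustration of what 'envelope-tracking activity' means operationally, the following hypothetical snippet estimates coherence between a stimulus envelope and a simultaneously recorded cortical signal in the delta-theta range. The variable names and the single-sensor setup are assumptions for illustration; the actual MEG analyses involve many sensors, source modeling, and careful statistics.

```python
import numpy as np
from scipy.signal import coherence, hilbert, resample

def envelope_tracking(audio, fs_audio, neural, fs_neural, fmax=8.0):
    """Coherence between a speech envelope and one neural channel (< fmax Hz).

    audio : 1-D stimulus waveform; neural : 1-D cortical recording (e.g. one
    MEG sensor) covering the same time window. Purely illustrative.
    """
    # Sanity check: both recordings should span the same duration.
    assert abs(len(audio) / fs_audio - len(neural) / fs_neural) < 0.01

    # Speech envelope, resampled onto the neural time grid so the two
    # series can be compared sample by sample.
    env = np.abs(hilbert(audio))
    env = resample(env, len(neural))

    # Magnitude-squared coherence in ~1 s windows (about 1 Hz resolution).
    f, coh = coherence(env, neural, fs=fs_neural, nperseg=int(fs_neural))

    keep = f <= fmax
    return f[keep], coh[keep]
```

Comparing such coherence spectra for intact versus envelope-degraded stimuli is one simple way to approximate the contrast reported in [65], although the published analyses are considerably more sophisticated.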

If neuronal activity locks or entrains to stimulus features, is this process subject to the vagaries of naturalistic communication, with wandering attention, for example? Several recent studies have tested this [66,67–69]; one result is shown in Figure 3b [69]. This ECog study investigated the manner in which speech streams are represented in brain activity and the way that selective attention governs the representation of speech using the 'Cocktail Party' paradigm. It is shown that brain activity dynamically tracks speech streams using both low-frequency (delta, theta) phase and high-frequency (high gamma) amplitude fluctuations, and it is argued that optimal encoding likely combines the two—in the spirit of multi-timescale processing. Moreover, in and near low-level auditory cortices, attention modulates the representation by enhancing cortical tracking of attended speech streams (Figure 3b). Yet ignored speech remains represented. Jointly, these studies demonstrate the potential role that neuronal oscillations may play in the parsing, decoding, and attending to naturalistic speech [18].
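A minimal way to quantify the attention effect described above, under the simplifying assumptions of a single neural channel and known clean envelopes for each talker, is to compare how strongly the recording tracks the attended versus the ignored stream. The lagged-correlation measure below is a stand-in for illustration; the cited studies use richer phase- and amplitude-based reconstruction methods.

```python
import numpy as np

def tracking_index(neural, env_attended, env_ignored, max_lag=50):
    """Compare tracking of two competing speech envelopes by one neural signal.

    All inputs are 1-D arrays at the same sampling rate and length; max_lag is
    the largest lag (in samples) searched. Returns (r_attended, r_ignored),
    each the peak absolute Pearson correlation across lags. Illustrative only.
    """
    def peak_corr(x, y):
        best = 0.0
        for lag in range(max_lag + 1):
            # assume neural activity lags the stimulus by `lag` samples
            a = x[lag:]
            b = y[:len(a)]
            r = np.corrcoef(a, b)[0, 1]
            best = max(best, abs(r))
        return best

    return peak_corr(neural, env_attended), peak_corr(neural, env_ignored)
```

An attended-minus-ignored difference in such an index, computed per electrode, is one crude proxy for the attentional enhancement of cortical tracking described in the text.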

Conclusion

There has, of course, been exciting progress in other more neuro-computational studies of language, including speech production [70], lexical representation and processing [10], and predictive coding [71]. This perspective is necessarily brief and selective. It is fair to say, however, that in the last 10–15 years, the model of how language is processed in the brain has changed dramatically from the classical perspective developed between 1861 (Broca), 1874 (Wernicke), 1885 (Lichtheim), and the 1970s (Geschwind). These changes are a consequence of a more mature linguistics, psychology, and neuroscience. It is important to acknowledge the immense contribution to basic and clinical neuroscience that the classical model has made—but to acknowledge as well that it cannot be carried forth as a realistic view on language and the brain.

Conflict of interest

There is no conflict of interest.

Acknowledgments

DP is supported by NIH 2R01DC05660, NSF BSC-1344285, and Army Research Office MURI ARO 54228-LS-MUR.

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:

• of special interest
•• of outstanding interest

1. Hickok G, Poeppel D: The cortical organization of speech processing. Nat Rev Neurosci 2007, 8:393-402.
2. Friederici AD: The brain basis of language processing: from structure to function. Physiol Rev 2011, 91:1357-1392.
3. Pulvermüller F, Fadiga L: Active perception: sensorimotor circuits as a cortical basis for language. Nat Rev Neurosci 2010, 11:351-360.
4. Poeppel D, Embick D: Defining the relation between linguistics and neuroscience. In Twenty-first Century Psycholinguistics: Four Cornerstones. Edited by Cutler A. Lawrence Erlbaum; 2005:103-120.
5. Scharinger M, Merickel J, Riley J, Idsardi WJ: Neuromagnetic evidence for a featural distinction of English consonants: sensor- and source-space data. Brain Lang 2011, 116:71-82.
6. Mesgarani N, Cheung C, Johnson K, Chang EF: Phonetic feature encoding in human superior temporal gyrus. Science 2014, 343:1006-1010.
ECog data show the sensitivity and specificity profiles of recordings over left STG. Both acoustic patterns and linguistic (distinctive featural) patterns are observed, consistent with the view that features (defining articulation) have acoustic correlates.
7. Fiorentino R, Naito-Billen Y, Bost J, Fund-Reznicek E: Electrophysiological evidence for the morpheme-based combinatoric processing of English compounds. Cogn Neuropsychol 2013. PMID: 24279696.
8. Almeida D, Poeppel D: Word-specific repetition effects revealed by MEG and the implications for lexical access. Brain Lang 2013, 127:497-509.

9. Bozic M, Tyler LK, Su L, Wingfield C, Marslen-Wilson WD: Neurobiological systems for lexical representation and analysis in English. J Cogn Neurosci 2013, 25:1678-1691.
10. Ettinger A, Linzen T, Marantz A: The role of morphology in phoneme prediction: evidence from MEG. Brain Lang 2014, 129:14-23.
11. Crepaldi D, Berlingeri M, Cattinelli I, Borghese NA, Luzzatti C, Paulesu E: Clustering the lexicon in the brain: a meta-analysis of the neurofunctional evidence on noun and verb processing. Front Hum Neurosci 2013, 7:303.
12. Fedorenko E, Nieto-Castañón A, Kanwisher N: Syntactic processing in the human brain: what we know, what we don't know, and a suggestion for how to proceed. Brain Lang 2012, 120:187-207.
13. Moro A: On the similarity between syntax and actions. Trends Cogn Sci 2014, 18:109-110.
14. Bemis DK, Pylkkänen L: Basic linguistic composition recruits the left anterior temporal lobe and left angular gyrus during both listening and reading. Cereb Cortex 2013, 23:1859-1873.
Building on a line of research investigating how the elementary structural and semantic composition of linguistic pieces is accomplished, this work highlights where and when basic linguistic composition is executed across language tasks.
15. Westerlund M, Pylkkänen L: The role of the left anterior temporal lobe in semantic composition vs. semantic memory. Neuropsychologia 2014, 57:59-70.
16. Hagoort P, van Berkum J: Beyond the sentence given. Philos Trans R Soc Lond B Biol Sci 2007, 362:801-811.
17. Xu J, Kemeny S, Park G, Frattali C, Braun A: Language in context: emergent features of word, sentence, and narrative comprehension. Neuroimage 2005, 25:1002-1015.
18. Zion Golumbic EM, Poeppel D, Schroeder CE: Temporal context in speech processing and attentional stream selection: a behavioral and neural perspective. Brain Lang 2012, 122:151-161.
19. Li X, Yang Y: How long-term memory and accentuation interact during spoken language comprehension. Neuropsychologia 2013, 51:967-978.
20. Kotz SA, Paulmann S: When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Res 2007, 1151:107-118.
21. January D, Trueswell JC, Thompson-Schill SL: Co-localization of Stroop and syntactic ambiguity resolution in Broca's area: implications for the neural basis of sentence processing. J Cogn Neurosci 2009, 21:2434-2444.
22. Poeppel D, Idsardi WJ, van Wassenhove V: Speech perception at the interface of neurobiology and linguistics. Philos Trans R Soc B Biol Sci 2008, 363:1071.
23. Beeman MJ, Bowden EM, Gernsbacher MA: Right and left hemisphere cooperation for drawing predictive and coherence inferences during normal story comprehension. Brain Lang 2000, 71:310-336.
24. Federmeier KD: Thinking ahead: the role and roots of prediction in language comprehension. Psychophysiology 2007, 44:491-505.
25. Obermeier C, Menninghaus W, von Koppenfels M, Raettig T, Schmidt-Kassow M, Otterbein S, Kotz SA: Aesthetic and emotional effects of meter and rhyme in poetry. Front Psychol 2013, 4:10.
26. Poeppel D: The maps problem and the mapping problem: two challenges for a cognitive neuroscience of speech and language. Cogn Neuropsychol 2012, 29:34-55.
27. Price CJ: A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012, 62:816-847.
28. Vigneau M, Beaucousin V, Hervé PY, Duffau H, Crivello F, Houdé O et al.: Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. Neuroimage 2006, 30:1414-1432.
29. Vigneau M, Beaucousin V, Hervé PY, Jobard G, Petit L, Crivello F et al.: What is right-hemisphere contribution to phonological, lexico-semantic, and sentence processing? Insights from a meta-analysis. Neuroimage 2011, 54:577-593.
30. Poeppel D: Pure word deafness and the bilateral processing of the speech code. Cogn Sci 2001, 25:679-693.
31. Hickok G, Poeppel D: Towards a functional neuroanatomy of speech perception. Trends Cogn Sci 2000, 4:131-138.
32. Bozic M, Tyler LK, Ives DT, Randall B, Marslen-Wilson WD: Bihemispheric foundations for human speech comprehension. Proc Natl Acad Sci USA 2010, 107:17439-17444.
33. Turkeltaub PE, Coslett HB: Localization of sublexical speech perception components. Brain Lang 2010, 114:1-15.
34. Binder JR, Frost JA, Hammeke TA, Bellgowan PS, Springer JA, Kaufman JN, Possing ET: Human temporal lobe activation by speech and nonspeech sounds. Cereb Cortex 2000, 10:512-528.
35. Morillon B, Lehongre K, Frackowiak RS, Ducorps A, Kleinschmidt A, Poeppel D, Giraud AL: Neurophysiological origin of human brain asymmetry for speech and language. Proc Natl Acad Sci USA 2010, 107:18688-18693.
36. Morillon B, Liégeois-Chauvel C, Arnal LH, Bénar CG, Giraud AL: Asymmetric function of theta and gamma activity in syllable processing: an intra-cortical study. Front Psychol 2012, 3:248.
This study capitalizes on rare bilateral intracortical recordings in a patient to test a specific computational hypothesis about asymmetry in speech perception.

37. Cogan GB, Thesen T, Carlson C, Doyle W, Devinsky O, Pesaran B: Sensory-motor transformations for speech occur bilaterally. Nature 2014, 507:94-98.
The ECog data presented demonstrate, among a range of findings, that the sensory-motor coordinate transformations that underlie the mapping from sounds to articulations are bilaterally mediated, against the prevailing dogma that dorsal stream sensory-motor functions are highly lateralized.
38. Wlotko EW, Federmeier KD: Two sides of meaning: the scalp-recorded N400 reflects distinct contributions from the cerebral hemispheres. Front Psychol 2013, 4:181.
39. Poeppel D: The analysis of speech in different temporal integration windows: cerebral lateralization as 'asymmetric sampling in time'. Speech Commun 2003, 41:245-255.
40. McGettigan C, Scott SK: Cortical asymmetries in speech perception: what's wrong, what's right and what's left? Trends Cogn Sci 2012, 16:269-276.
41. Scott SK, McGettigan C: Do temporal processes underlie left hemisphere dominance in speech perception? Brain Lang 2013, 127:36-45.
42. Giraud AL, Kleinschmidt A, Poeppel D, Lund TE, Frackowiak RS, Laufs H: Endogenous cortical rhythms determine cerebral specialization for speech perception and production. Neuron 2007, 56:1127-1134.
43. Hickok G, Poeppel D: Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 2004, 92:67-99.
44. Rauschecker JP, Scott SK: Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci 2009, 12:718-724.
45. Gow DW: The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing. Brain Lang 2012, 121:273-288.
46. Lau EF, Phillips C, Poeppel D: A cortical network for semantics: (de)constructing the N400. Nat Rev Neurosci 2008, 9:920-933.
47. Anwander A, Tittgemeyer M, von Cramon DY, Friederici AD, Knösche TR: Connectivity-based parcellation of Broca's area. Cereb Cortex 2007, 17:816-825.

48. Amunts K, Lenzen M, Friederici AD, Schleicher A, Morosan P, Palomero-Gallagher N, Zilles K: Broca's region: novel organizational principles and multiple receptor mapping. PLoS Biol 2010, 8.
The anatomic analysis of Broca's region summarized here demonstrates the complexity of the region that has not been sufficiently appreciated in functional studies. The state-of-the-art map of Broca's region invites a much more granular view of the computations executed there.
49. Fedorenko E, Duncan J, Kanwisher N: Language-selective and domain-general regions lie side by side within Broca's area. Curr Biol 2012, 22:2059-2062.
50. Turken AU, Dronkers NF: The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Front Syst Neurosci 2011, 5:1.
An extensive connectivity study of several left-hemisphere temporal language regions. The left MTG shows a very extensive structural and functional connectivity pattern, consistent with the language impairments associated with posterior MTG lesions. These data reinforce an essential role for this region in language processing, probably at the lexical level.
51. Dronkers NF, Wilkins DP, Van Valin RD, Redfern BB, Jaeger JJ: Lesion analysis of the brain areas involved in language comprehension. Cognition 2004, 92:145-177.

52. Lewis G, Poeppel D: The role of visual representations during the lexical access of spoken words. Brain Lang 2014, 134:1-10, http://dx.doi.org/10.1016/j.bandl.2014.03.008.
53. Pallier C, Devauchelle AD, Dehaene S: Cortical representation of the constituent structure of sentences. Proc Natl Acad Sci USA 2011, 108:2522-2527.
The parametric fMRI experiment described tests how local syntactic structures of different sizes, i.e. constituent structures, are built. The results suggest a segregation into areas sensitive to structural features alone as well as complementary areas that exploit structure-meaning pairings.
54. Tyler LK, Cheung TP, Devereux BJ, Clarke A: Syntactic computations in the language network: characterizing dynamic network properties using representational similarity analysis. Front Psychol 2013, 4:271.
55. Obleser J, Lahiri A, Eulitz C: Magnetic brain response mirrors extraction of phonological features from spoken vowels. J Cogn Neurosci 2004, 16:31-39.
56. Phillips C, Pellathy T, Marantz A, Yellin E, Wexler K, Poeppel D et al.: Auditory cortex accesses phonological categories: an MEG mismatch study. J Cogn Neurosci 2000, 12:1038-1055.
57. DeWitt I, Rauschecker JP: Phoneme and word recognition in the auditory ventral stream. Proc Natl Acad Sci USA 2012, 109:505-514.
This meta-analysis of imaging data argues for a left-lateralized, two-stage model of speech and auditory word-form recognition, with the analysis of phonemes occurring in mid-STG and word recognition occurring in anterior STG. This opens a new debate, identifying a region (anterior STG) typically not implicated in word recognition.
58. Giraud AL, Poeppel D: Cortical oscillations and speech processing: emerging computational principles and operations. Nat Neurosci 2012, 15:511-517.
59. Luo H, Poeppel D: Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron 2007, 54:1001-1010.
This is the first in a series of papers from different labs that argues that cortical oscillations entrain to continuous speech. In particular, the role of the phase pattern of theta band oscillations is implicated.

60. Luo H, Liu Z, Poeppel D: Auditory cortex tracks both auditory and visual stimulus dynamics using low-frequency neuronal phase modulation. PLoS Biol 2010, 8(8):e1000445.
61. Luo H, Poeppel D: Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex. Front Psychol 2012, 3:170.
62. Luo H, Tian X, Song K, Zhou K, Poeppel D: Neural response phase tracks how listeners learn new acoustic representations. Curr Biol 2013, 23:968-974.
63. Peelle JE, Gross J, Davis MH: Phase-locked responses to speech in human auditory cortex are enhanced during comprehension. Cereb Cortex 2013, 23:1378-1387.
64. Ghitza O: Linking speech perception and neurophysiology: speech decoding guided by cascaded oscillators locked to the input rhythm. Front Psychol 2011, 2:130.
A computational model is presented that outlines how low-frequency cortical oscillations, mainly in the theta band, can be exploited to parse an input signal into chunks that are subsequently decoded using higher-frequency beta and gamma oscillations.
65. Doelling KB, Arnal LH, Ghitza O, Poeppel D: Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing. Neuroimage 2014, 85:761-768.
66. Ding N, Simon JZ: Emergence of neural encoding of auditory objects while listening to competing speakers. Proc Natl Acad Sci USA 2012, 109:11854-11859.
The data from this MEG study indicate that concurrently present auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery (like a cocktail party), are encoded individually in the auditory cortex. Moreover, each stream emerges as a basic representational unit subject to top-down attentional modulation.
67. Mesgarani N, Chang EF: Selective cortical representation of attended speaker in multi-talker speech perception. Nature 2012, 485:233-236.
68. Zion Golumbic E, Cogan GB, Schroeder CE, Poeppel D: Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party". J Neurosci 2013, 33:1417-1426.
69. Zion Golumbic EM, Ding N, Bickel S, Lakatos P, Schevon CA, McKhann GM et al.: Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party". Neuron 2013, 77:980-991.
70. Hickok G: Computational neuroanatomy of speech production. Nat Rev Neurosci 2012, 13:135-145.
71. Gagnepain P, Henson RN, Davis MH: Temporal predictive codes for spoken words in auditory cortex. Curr Biol 2012, 22:615-621.
The rather complex design of this MEG study allows the authors to investigate alternative models, testing the role of lexical competition versus segment-based prediction in auditory word recognition. The latter view wins, providing strong support for predictive models in language processing.
72. Santoro R, Moerel M, De Martino F, Goebel R, Ugurbil K, Yacoub E, Formisano E: Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex. PLoS Comput Biol 2014, 10(1):e1003412, http://dx.doi.org/10.1371/journal.pcbi.1003412.
