Conjoining auditory and visual features during high-rate serial presentation: Processing and conjoining two features can be faster than processing one

David L. Woods, Claude Alain, Keith H. Ogawa

Research output: Contribution to journal › Article


Abstract

The time required to conjoin stimulus features in high-rate serial presentation tasks was estimated in auditory and visual modalities. In the visual experiment, targets were defined by color, orientation, or the conjunction of color and orientation features. Responses were fastest in color conditions, intermediate in orientation conditions, and slowest in conjunction conditions. Estimates of feature conjunction time (FCT) were derived on the basis of a model in which features were processed in parallel and then conjoined, permitting FCTs to be estimated from the difference in reaction times between conjunction and the slowest single-feature condition. Visual FCTs averaged 17 msec, but were negative for certain stimuli and subjects. In the auditory experiment, targets were defined by frequency, location, or the conjunction of frequency and location features. Responses were fastest in frequency conditions, but were faster in conjunction than in location conditions, yielding negative FCTs. The results from both experiments suggest that the processing of stimulus features occurs interactively during early stages of feature conjunction.
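The abstract's parallel-processing model implies a simple estimate: the feature conjunction time is the conjunction-condition reaction time minus the reaction time of the slower single-feature condition. The sketch below is a hypothetical illustration of that arithmetic only; the reaction times used are invented placeholders, not data reported in the paper.

```python
def estimate_fct(rt_feature_a: float, rt_feature_b: float, rt_conjunction: float) -> float:
    """Return FCT = RT(conjunction) - max(RT of the two single-feature conditions),
    following the parallel-race logic described in the abstract."""
    return rt_conjunction - max(rt_feature_a, rt_feature_b)

# Example with made-up mean RTs (msec); a negative value would indicate that
# conjunction targets were detected faster than the slower single feature alone.
print(estimate_fct(rt_feature_a=420.0, rt_feature_b=455.0, rt_conjunction=472.0))  # -> 17.0
```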

Original language: English (US)
Pages (from-to): 239-249
Number of pages: 11
Journal: Perception and Psychophysics
Volume: 60
Issue number: 2
State: Published - Feb 1998

ASJC Scopus subject areas

  • Psychology (all)
  • Experimental and Cognitive Psychology