Multisensory Integration in Visual Pattern Recognition

dc.contributor.advisor Sekuler, Robert
dc.contributor.author Aizenman, Avigael
dc.date.accessioned 2013-06-21T21:12:01Z
dc.date.available 2013-06-21T21:12:01Z
dc.date.issued 2013
dc.description.abstract Humans’ talent for pattern recognition is made possible by a close partnership between perception and memory. Building on my previous work (Aizenman, Bond, Sekuler, & Gold, 2012; Gold, Aizenman, Bond, & Sekuler, under review), this thesis presents two studies that explore the role of multisensory integration in that partnership. Recently, Michalka, Rosen, Kong, Shinn-Cunningham, and Somers (2012) showed that a task involving rapid presentation of visual stimuli can activate cortical regions normally implicated in auditory attention. As music training is known to fine-tune the human auditory system (Kraus & Chandrasekaran, 2010), I hypothesized that such training might improve performance on rapid temporal tasks, not only for auditory stimuli but also for visual and multisensory stimuli. To test this hypothesis, I presented subjects with eight-item, randomly generated sequences of luminances, tones, or both together, in either a correlated or an uncorrelated relationship. Each sequence of successive items was just one second long. Subjects judged whether the second four items in a stimulus sequence were an identical repetition of the first four. Performance was expressed as values of d'. Four trial types were presented in separate blocks: Auditory alone, Visual alone, AV-congruent (luminance sequences accompanied by auditory tones whose frequencies were cross-modally matched to the luminances), and AV-incongruent (luminance sequences accompanied by randomly generated, incongruent tones). For both types of AV trials, subjects were instructed to base their judgments on the luminances alone, ignoring the tones when deciding whether the luminance sequence repeated. Unknown to subjects, a fixed repeating (fixRN) exemplar recurred multiple times throughout a block of trials for all stimulus types.
It was predicted that subjects would develop a memory for this exemplar, which would express itself as improved performance with the fixRN exemplar. Subjects returned to the lab for a second session approximately 24 hours later and were shown exactly the same stimuli in the same counterbalanced order as on the previous day. Fourteen subjects with music training (6-15 years) and fourteen subjects with minimal music training (<3 years) were tested. Overall, performance was best with the auditory stimuli, and learning for the fixRN exemplar was most robust with the AV-incongruent exemplar. There were no differences in performance between the two days, and no differences between music-trained and non-music-trained subjects. To examine whether the wide range of auditory frequencies used in Experiment One may have influenced the results, Experiment Two tested subjects with a reduced range of auditory frequencies. Overall, music-trained subjects outperformed non-music-trained subjects (p < .01). For both groups, performance was significantly better on AV-congruent trials than on all other trial types. When auditory and visual sequences were in perceptual correspondence, subjects could exploit this correspondence to enhance judgments that were nominally visual. Our results are consistent with the hypothesis that music training improves performance with rapidly presented stimulus sequences, even for visual sequences.
dc.format.mimetype application/pdf
dc.language English
dc.language.iso eng
dc.publisher Brandeis University
dc.relation.ispartofseries Brandeis University Theses and Dissertations
dc.rights Copyright by Avigael Aizenman 2013
dc.title Multisensory Integration in Visual Pattern Recognition
dc.type Thesis
dc.contributor.department Department of Psychology BA Bachelors Psychology Brandeis University, College of Arts and Sciences
