Date and Time: July 9th, 10:00-13:30
Psychophysical experiments are a widespread tool for investigating the sensory performance of animal or human observers and can allow one to infer properties of the underlying brain processes. For example, masking experiments operating at the limits of the spatial and temporal resolution of the visual system yield constraints on neural signal processing, and the perception of ambiguous figures elucidates dynamical properties of neural systems. In this workshop, we want to discuss cortical models for psychophysical phenomena and the role that the combination of psychophysics and modelling plays in understanding the brain processes underlying perception and cognition.
Schedule and List of speakers:
Stephen Macknik - Wakes and Spokes: Two illusions that constrain models of long-range connections in the visual system
Odelia Schwartz - Information integration: Examining a psychophysical model in visual cortex
Lars Schwabe - Modeling perceptual learning: A promising paradigm to infer cortical organization principles from visual psychophysics?
Aleksandr Pastukhov - Attention and pattern motion
Simon Thorpe - Using temporal constraints from psychophysics to constrain models of cortical processing
Udo Ernst - Local interactions in a structurally simple neural network explain global effects in Gestalt processing and masking
Wakes and Spokes: Two illusions that constrain models of
long-range connections in the visual system
Stephen Macknik (UCL, Institute of Ophthalmology)
Inhibitory effects responsible for Mach bands and other illusions of brightness and visibility are often thought of as local effects controlled by short-distance circuits such as lateral inhibition. Here we will discuss illusions called "Wakes and Spokes" which at first appear to be under short-distance circuit control, but are revealed to have very long-distance underpinnings. We will see several dynamic demos of these effects.
Modeling perceptual learning: A promising paradigm to infer
cortical organization principles from visual psychophysics?
Lars Schwabe (TU Berlin, Department of Electrical Engineering and Computer Science)
In general, one assumes that the visual system computes representations of the 'outside' world which serve as the basis for an animal's behavior. The early visual system has been investigated for many years in order to determine its structure and function. From a computational point of view, however, at least two main questions are still open: (i) how to conceptualize the computations in, e.g., the primary visual cortex beyond a purely 'mechanistic' level of description, and (ii) how information about the environment is encoded in neuronal firing patterns. Here we hypothesize that combining modeling and visual psychophysics may serve as a promising paradigm for inferring at least partial answers to these questions. In this talk we will first briefly summarize recent results from psychophysical studies of perceptual learning (1,2). Then we will consider a simple computational model for the perceptual learning of an orientation discrimination task. Finally, we discuss how modeling, visual psychophysics, and the already available knowledge about cortical circuits and their plasticity may be combined to infer organization principles of the visual cortex and their underlying mechanisms.
(1) Watanabe et al., Nature Neuroscience, 2002.
(2) Seitz and Watanabe, Nature, 2003.
Attention and Pattern Motion
Aleksandr Pastukhov (University of Plymouth)
Attention is known to facilitate divisive inhibition for static patterns. We are using dual-task experiments to study the role of attention in motion perception. A model of cortical responses to motion (Simoncelli & Heeger, 1998) is used to predict responses to different types of moving patterns. The stimulus is a moving pattern of log-Gabor patches, chosen to optimally match the tuning of motion-sensitive cells. Dual-task experiments allow us to control subjects' attention and to study their ability to discriminate motion direction with and without attention. To further explore the role of attention in motion perception, we also use superimposed transparent patterns with "sliding" and "sticking" behavior.
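The divisive inhibition mentioned above follows the standard normalization form used in models of this kind: each unit's squared drive is divided by the pooled drive of the population plus a semisaturation constant. The sketch below is a generic illustration of that operation, with attention modeled (as one simple assumption, not necessarily the authors' choice) as a gain on the normalization pool.

```python
import numpy as np

def normalized_response(drives, sigma=1.0, pool_gain=1.0):
    """Divisive normalization: R_i = d_i^2 / (sigma^2 + g * sum_j d_j^2).

    drives    : linear drives of the units in the pool
    sigma     : semisaturation constant
    pool_gain : gain g on the pool; a larger gain models stronger
                (e.g. attention-facilitated) divisive inhibition
    """
    d = np.asarray(drives, dtype=float) ** 2
    return d / (sigma ** 2 + pool_gain * d.sum())

# A unit's response to its preferred component is suppressed when a
# second component is added to the pool (cross-orientation/plaid-like
# suppression), and suppressed further when the pool gain is raised.
alone = normalized_response([1.0, 0.0])[0]
plaid = normalized_response([1.0, 1.0])[0]
attended = normalized_response([1.0, 1.0], pool_gain=2.0)[0]
```

Here `alone > plaid > attended`, i.e. adding a pool member or raising the pool gain both reduce the component response, which is the signature dual-task experiments can probe.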
Information integration: examining a psychophysical
model in visual cortex
Odelia Schwartz, Javier R. Movellan, Thomas Wachtler, Thomas D. Albright, and Terrence J. Sejnowski (Salk Institute)
We examine a particular model that has been proposed for the perceptual combination of information across multiple sources. Specifically, Morton and Massaro suggested that the probability of a response category given two sources of information can be factorized into components selectively determined by each of the sources. Here we focus on visual spatial integration of color in monkey primary visual cortex (area V1). We show that the probability of spike counts given center and surround color stimuli can be well fit by a model that factorizes into a component selectively determined by the center and a component selectively determined by the surround. This line of work suggests a computational principle for information integration in the brain, and we pose issues for further testing its applicability.
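In symbols, the factorized model described here can be written as follows (using r for the response category or spike count and c, s for the center and surround stimuli; the notation is ours, not necessarily that of the talk):

```latex
P(r \mid c, s) \;=\; \frac{F(r, c)\, G(r, s)}{\sum_{r'} F(r', c)\, G(r', s)}
```

The defining property is that no factor depends jointly on both c and s; the denominator only normalizes the distribution over responses.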
Using temporal constraints from psychophysics
to constrain models of cortical processing
Simon Thorpe (Centre de Recherche Cerveau et Cognition, Toulouse)
Jerry Feldman's 100-step limit was one of the motivations behind the development of PDP and connectionist models in the early 1980s. He argued that since high-level perceptual decisions can be made in about half a second, and since the time between spikes is rarely less than 5 ms, the underlying computations must involve no more than about 100 (massively parallel) steps. In the last 15 years, experimental work has imposed more and more severe constraints on the underlying computations. I will discuss our recent work on ultra-rapid scene processing using eye movements, which shows that sophisticated behavioural responses can be generated in under 150 ms in humans. Few current models of cortical processing are capable of explaining such results. One option, implemented in SpikeNet, a real-time image-processing system we have developed, uses just the first few percent of neurons to fire at each stage of the visual system. Our simulations suggest that computations based on such a scheme could indeed operate fast enough to be consistent with the experimentally imposed temporal constraints.
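Feldman's arithmetic is simply 500 ms / 5 ms per serial step, giving roughly 100 steps. The "first few percent to fire" idea can be sketched as rank-order selection: units driven most strongly fire earliest, and only the earliest fraction is propagated to the next stage. The following is a minimal illustration of that selection step, not SpikeNet's actual implementation.

```python
import numpy as np

def first_spikes(drives, fraction=0.05):
    """Rank-order sketch: stronger-driven units fire earlier; only the
    first `fraction` of units to fire are propagated to the next stage.

    Returns a boolean mask over units marking the earliest spikes.
    """
    drives = np.asarray(drives, dtype=float)
    k = max(1, int(fraction * drives.size))
    order = np.argsort(-drives)       # earliest spikes = largest drives
    mask = np.zeros(drives.size, dtype=bool)
    mask[order[:k]] = True
    return mask
```

Because each stage needs only one wave of first spikes rather than a firing-rate estimate, a feedforward pass through several such stages can fit comfortably inside a 150 ms budget.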
Local interactions in a structurally simple neural network
explain global effects in Gestalt processing and masking
Udo Ernst, Axel Etzold, Michael H. Herzog, and Christian W. Eurich (University of Bremen, Bremen)
We present results from a modeling study which aims at explaining various psychophysical masking experiments within a coherent theoretical framework. In the experiments, a target element is presented for a very short time, followed by masks in various spatial and temporal configurations. Detectability of the target depends strongly on those configurations, suggesting that the Gestalt properties of the mask, rather than lower-order cues like overall stimulus luminance, determine the observer's performance. In a structurally simple model with an architecture resembling cortical networks, we show that the phenomena of Gestalt processing and masking can be explained solely by the dynamics mediated by intracortical interactions. Moreover, the experiments allow us to constrain the model parameters and to identify the relevant time constants and length constants of neural architectures capable of reproducing the observed effects.
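The kind of dynamics meant here can be illustrated by a generic rate network with lateral (intracortical) interactions, integrated with the Euler method. The two-unit mutual-inhibition example and all constants below are our own illustrative assumptions, not the parameters of the Bremen model.

```python
import numpy as np

def simulate(stimulus, W, tau=10.0, dt=1.0, steps=200):
    """Euler integration of a simple rate network:

        tau * dA/dt = -A + [stimulus + W @ A]_+

    where W holds the lateral interaction weights and [.]_+ is
    half-wave rectification. Returns the activity after `steps` steps.
    """
    stimulus = np.asarray(stimulus, dtype=float)
    A = np.zeros(stimulus.size)
    for _ in range(steps):
        drive = stimulus + W @ A
        A += (dt / tau) * (-A + np.maximum(drive, 0.0))
    return A

# Two units with mutual lateral inhibition: the more strongly driven
# unit suppresses the weaker one, a minimal example of how lateral
# interactions, not input strength alone, shape the outcome.
W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])
A = simulate([1.0, 0.5], W)
```

The time constant tau and the spatial reach of W are exactly the quantities the abstract says the masking experiments constrain.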
See also our webpage explaining and modeling the Shine-Through and Feature Inheritance effects.
CNS*03 paper preview "How ideally do Macaque monkeys integrate contours?" by Ernst, Mandon, Pawelzik, and Kreiter.