INTRODUCTION
All adaptive behaviors have some kind of goal, even when that goal is abstract
and/or far off in space and time, e.g., saving up money to put the kids through
university. Here, we consider goal-directed behaviors in the simplest, most
literal sense: movements aimed toward some goal in three-dimensional (3-D)
space, immediately or after a short delay. In particular, we will focus on the
early spatial transformations associated with goal-directed gaze and hand
movements.
As neuroscientists, our ultimate goal is to describe how the brain implements
these behaviors, but here we adopt the viewpoint that this topic cannot be
approached without first having a firm understanding of the behavior itself.
This extends beyond consideration of the 3-D location of the end-point
effector, such as the location of the fingertip, to the underlying
multi-dimensional geometry
of the systems that house the senses and control the effectors. Sometimes one
would like to ignore these details, but the brain has no such luxury. As we
shall see, the devil is in the details, in the sense that these details place
important constraints on brain function.
We also focus this review on the primate system and, where possible, on
humans. Animal models have laid the modern foundations for this field
and remain necessary for pushing the boundaries of our detailed knowledge, but in
the past 10 years animal neurophysiology has been paralleled by equally
important human experiments using technologies such as functional magnetic
resonance imaging (fMRI), transcranial magnetic stimulation (TMS), and
magnetoencephalography (MEG). These
technologies allow one to confirm the known animal physiology in humans,
sometimes reveal interspecies differences, occasionally push our knowledge of
basic function forward, and often bridge the gap from basic function to
clinically observed deficits.
Finally, although no equations will appear in this review, we will take a
computational perspective, approaching the subject of sensorimotor
transformations as a series of computational problems, organized roughly in
terms of the order that one would encounter these problems in a feed-forward
transformation. From this perspective, although gaze and hand movements
obviously differ in terms of both biomechanics and neural control, we might
hope to reveal certain universal elements of goal-directed
movement. For example, as we shall see, certain aspects of visuospatial memory
that were once assumed to be the province of vision and eye movements have been
shown to apply equally well to the reach system.
Before reviewing the details of how the brain might perform spatial
transformations for hand and gaze control, we need to establish some background
vocabulary: first in the general language of spatial representation and
transformation, next in the basic spatial properties of our model systems, and
finally in the language of cortical and sub-cortical physiology.