The present study investigates the computational processes involved in transforming visual information into motor commands for the control of saccadic and reaching movements. This transformation requires intermediate spatial representations that combine visual information with signals coding postural variables, such as eye, head, and arm position. To program a reaching movement, the brain must transform retinal coordinates into coordinates centered on the effector. The intermediate layer of the model uses units that compute basis functions of the input variables to generate these spatial representations, which in turn drive motor planning. The model's output codes the motor error required to foveate or reach visual targets. Our simulations show that the model is both biologically plausible and computationally efficient.
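The basis-function scheme described above can be illustrated with a minimal sketch. Each hidden unit multiplies a Gaussian tuning curve for retinal target position by a sigmoid of eye position (a gain-field-like response), and a linear readout of this layer is trained to recover the head-centered target location (retinal position plus eye position), which serves as the motor error signal. All tuning parameters, grid sizes, and the one-dimensional setting here are illustrative assumptions, not the model's actual architecture.

```python
import numpy as np

def basis_responses(retinal_x, eye_x, retinal_centers, eye_offsets, sigma=10.0):
    """Basis layer: each unit is a Gaussian of retinal position
    multiplied by a sigmoid of eye position (gain modulation)."""
    g = np.exp(-((retinal_x - retinal_centers) ** 2) / (2.0 * sigma ** 2))
    s = 1.0 / (1.0 + np.exp(-(eye_x - eye_offsets) / 8.0))
    return np.outer(g, s).ravel()  # all pairwise products of the two tunings

# Hypothetical tuning grids covering +/-40 deg (illustrative values).
retinal_centers = np.linspace(-40, 40, 9)
eye_offsets = np.linspace(-40, 40, 9)

# Train a linear readout to map basis activity to the head-centered
# target position, i.e. retinal position + eye position.
rng = np.random.default_rng(0)
R = rng.uniform(-40, 40, 500)   # retinal target positions
E = rng.uniform(-40, 40, 500)   # eye positions
X = np.array([basis_responses(r, e, retinal_centers, eye_offsets)
              for r, e in zip(R, E)])
w, *_ = np.linalg.lstsq(X, R + E, rcond=None)

# Read out the head-centered location of a novel target.
r_test, e_test = 12.0, -5.0
pred = basis_responses(r_test, e_test, retinal_centers, eye_offsets) @ w
```

Because products of Gaussians and sigmoids form a basis set, the same hidden layer can feed multiple readouts in different reference frames, which is the computational appeal of this representation; here `pred` should lie close to `r_test + e_test = 7`.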