Avoiding or intercepting looming objects requires a precise estimate of both the time until contact and the impact location. In natural situations, extrapolating a movement trajectory relative to an egocentric landmark requires taking into account variations in retinal input associated with moment-to-moment changes in body posture. Here, human observers predicted the impact location on their face of an approaching stimulus mounted on a robotic arm while we systematically manipulated the relation between eye, head, and trunk orientation. The projected impact point on the observer's face was estimated most accurately when the target originated from a location aligned with both the head and eye axes. Targets eccentric with respect to either axis produced a systematic perceptual bias ipsilateral to the trajectory's origin. We conclude (1) that predicting the impact point of a looming target requires combining retinal information with eye position information, (2) that this computation is accomplished accurately for some, but not all, combinations of these cues, (3) that the representation of looming trajectories is not formed in a single, canonical reference frame, and (4) that the observed perceptual biases may reflect an automatic adaptation for interceptive or defensive actions within near peripersonal space.