We measured the contributions of three-dimensional shape and two-dimensional surface reflectance to human face recognition across changes in viewpoint. We first separated laser scans of human heads into their three-dimensional shape and two-dimensional surface reflectance components. Next, we created shape-normalized faces by morphing each face's two-dimensional surface reflectance map onto the average three-dimensional head shape, and reflectance-normalized faces by morphing the average two-dimensional surface reflectance map onto each face's three-dimensional head shape. Observers learned frontal images of the original, shape-normalized, or reflectance-normalized faces and were then asked to recognize the faces across viewpoint changes of 0, 30, and 60 degrees. Both three-dimensional shape and two-dimensional surface reflectance contributed substantially to recognition performance, thus constraining theories of face representation to include both types of information.
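The stimulus construction described above can be sketched in code. This is a minimal illustration, not the authors' actual pipeline: it assumes the laser scans have already been brought into dense vertex correspondence, so that each head is an array of 3-D vertex positions (its shape component) paired with an array of per-vertex RGB values (its reflectance component), and that "morphing" a reflectance map onto a shape reduces to pairing one component with the other. All names and representations here are illustrative assumptions.

```python
import numpy as np

def normalize_faces(shapes, reflectances):
    """Build shape-normalized and reflectance-normalized stimuli from
    N heads in dense correspondence (illustrative sketch only).

    shapes:       (N, V, 3) array of 3-D vertex positions per head
    reflectances: (N, V, 3) array of per-vertex RGB reflectance per head
    """
    avg_shape = shapes.mean(axis=0)              # average 3-D head shape
    avg_reflectance = reflectances.mean(axis=0)  # average reflectance map

    # Shape-normalized: each face's own reflectance on the average shape,
    # so only reflectance distinguishes the faces.
    shape_normalized = [(avg_shape, r) for r in reflectances]

    # Reflectance-normalized: the average reflectance on each face's own
    # shape, so only 3-D shape distinguishes the faces.
    reflectance_normalized = [(s, avg_reflectance) for s in shapes]

    return shape_normalized, reflectance_normalized
```

Each returned pair would then be rendered from the three test viewpoints (0, 30, and 60 degrees) to produce the recognition stimuli.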