The Morphable Face Model captures the variations of 3D shape and texture that occur among human faces. It represents each face by a set of model coefficients and can generate new, natural-looking faces from any novel set of coefficients, which makes it useful for a wide range of applications in computer vision and computer graphics. The Morphable Face Model is derived from a data set of 3D face models by automatically establishing point-to-point correspondence between the examples and transforming their shapes and textures into a vector space representation. New faces and expressions can then be modeled by forming linear combinations of the prototypes (illustrated in the sketch below).

In this framework, it is easy to control complex facial attributes such as gender, attractiveness, body weight, or facial expressions. Attributes are learned automatically from a set of faces rated by the user and can then be applied to classify and manipulate new faces.

Given a single photograph of a face, we can estimate its 3D shape, its orientation in space, and the illumination conditions in the scene. Starting from a rough estimate of size, orientation, and illumination, our algorithm optimizes these parameters along with the face's internal shape and surface colour to find the best match to the input image. The face model extracted from the image can be rotated and manipulated in 3D.

Presentation at SIGGRAPH 99, in collaboration with Thomas Vetter.
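
As a rough illustration of the vector space representation, the Python sketch below forms a new face as the average of a set of example faces plus a linear combination of their offsets from that average. The array sizes, the random data standing in for registered 3D scans, and the coefficient values are invented for the example and are not part of the actual model.

    import numpy as np

    # Hypothetical example data: each row is one example face, flattened into
    # a vector of vertex coordinates (shape) or per-vertex RGB values (texture).
    # In the real model these would come from registered 3D scans; here they
    # are random numbers so the sketch runs on its own.
    rng = np.random.default_rng(0)
    num_examples, num_vertices = 200, 1000
    shapes = rng.normal(size=(num_examples, 3 * num_vertices))    # x, y, z per vertex
    textures = rng.normal(size=(num_examples, 3 * num_vertices))  # r, g, b per vertex

    # Average face and the offsets of each example from it.
    mean_shape, mean_texture = shapes.mean(axis=0), textures.mean(axis=0)
    shape_offsets = shapes - mean_shape
    texture_offsets = textures - mean_texture

    # A new face is the average plus a linear combination of the example
    # offsets; the coefficient vectors a and b play the role of the model
    # coefficients mentioned in the text.
    a = rng.normal(scale=0.1, size=num_examples)
    b = rng.normal(scale=0.1, size=num_examples)
    new_shape = mean_shape + a @ shape_offsets
    new_texture = mean_texture + b @ texture_offsets

In the same spirit, a facial attribute learned from user-rated examples would correspond to a direction in this vector space that can be added to new_shape and new_texture to manipulate a face along that attribute.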