Analyzing human faces is a traditional topic in computer vision research. For this task, model-based approaches have proven adequate for extracting high-level information in many applications. However, they require a robust estimation of model parameters to work reliably. To tackle this challenge, we train displacement experts that serve as an update function on initial model parameter configurations. Unfortunately, building displacement experts that work robustly even in unconstrained environments is a non-trivial task. Therefore, we rely on a priori information about the structure of human faces by integrating an image representation that reflects the location of several facial components, so-called “multi-band images”. By combining multi-band images and learned displacement experts, we propose a novel face model fitting approach. An evaluation on the “Labeled Faces in the Wild” database demonstrates that this approach provides robust fitting results even in unconstrained environments.
Keywords: image understanding, face model fitting, machine learning
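The core idea described above, a learned displacement expert that iteratively refines model parameters, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the names `DisplacementExpert` and `fit_model` are hypothetical, the expert is reduced to a linear least-squares regressor, and image features are abstracted into a callable that evaluates features at the current parameter configuration.

```python
import numpy as np

class DisplacementExpert:
    """Linear regressor mapping features at the current model
    configuration to a parameter update (a hypothetical sketch)."""

    def __init__(self, n_features, n_params):
        self.W = np.zeros((n_params, n_features))

    def fit(self, X, dP):
        # Train on feature vectors X (one row per perturbed sample) and
        # the displacements dP that bring each sample back to ground truth.
        W, *_ = np.linalg.lstsq(X, dP, rcond=None)
        self.W = W.T

    def predict(self, x):
        # Predicted update for a single feature vector x.
        return self.W @ x

def fit_model(image_features, params, expert, n_iters=10):
    """Iteratively refine model parameters with the displacement expert.

    image_features: callable returning a feature vector for a given
    parameter configuration (stands in for sampling the multi-band image).
    """
    for _ in range(n_iters):
        x = image_features(params)           # features at current configuration
        params = params + expert.predict(x)  # apply predicted displacement
    return params
```

In this toy setup the features depend linearly on the parameter error, so the expert can recover the exact inverse mapping; with real image features the update is only approximate, which is why fitting iterates the expert several times.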